Arrays are the most commonly used data structure. Arrays are of fixed size, indexed, and all contained elements are of the same type (i.e. a homogeneous collection). For example, storing employee details just read from the database as EmployeeDetail[ ], converting and storing a string as a byte array for further manipulation or processing, etc. Wrap an array in a class to protect it from being inadvertently altered. This would be true for other data structures as well.
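As a minimal sketch of wrapping an array in a class (the class and field names here are illustrative, not from the original text), the wrapper hands out copies so callers cannot alter the internal array:

package bigo;

import java.util.Arrays;

public class EmployeeDetails {

    private final String[] names;

    public EmployeeDetails(String[] names) {
        this.names = Arrays.copyOf(names, names.length); // defensive copy on the way in
    }

    public String[] getNames() {
        return Arrays.copyOf(names, names.length);        // defensive copy on the way out
    }
}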
Lists can be thought of as arrays that can grow. These data structures are generally backed by a fixed-size array, and they resize themselves as necessary. A list can have duplicate elements. For example, adding new line items to an order that stores its line items as a list, removing all expired products from a list of products, etc. Initialize them with an appropriate initial size to minimize the number of resizes.
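For example, the short sketch below (the values are illustrative) pre-sizes an ArrayList and then adds and removes elements:

import java.util.ArrayList;
import java.util.List;

public class ListExample {

    public static void main(String[] args) {
        // an initial capacity of 50 avoids repeated internal resizing as the order grows
        List<String> lineItems = new ArrayList<String>(50);
        lineItems.add("item-1");
        lineItems.add("item-1");              // duplicates are allowed in a List
        lineItems.add("expired-item");
        lineItems.remove("expired-item");     // removes the first matching element
        System.out.println(lineItems);        // [item-1, item-1]
    }
}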
Sets are like lists but they do not hold duplicate elements. Sets can be used when you have a requirement to store only unique elements.
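A short illustrative sketch: adding a duplicate to a Set has no effect.

import java.util.HashSet;
import java.util.Set;

public class SetExample {

    public static void main(String[] args) {
        Set<String> languages = new HashSet<String>();
        languages.add("Java");
        languages.add("Java");                 // the duplicate is silently ignored
        System.out.println(languages.size());  // prints 1
    }
}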
Stacks allow access to only one data item, which is the last item inserted (i.e. Last In First Out - LIFO). If you remove this item, you can access the next-to-last item inserted, and so on. The LIFO behavior is achieved by restricting access to methods like peek( ), push( ), and pop( ). This is useful in many programming situations, like parsing mathematical expressions such as (4+2) * 3, storing methods and exceptions in the order they occur, checking your source code to see if the brackets and braces are balanced properly, etc.
The LIFO access mechanism used by a stack has many practical uses. For example, evaluation of expressions / syntax parsing, validating and parsing XML, the undo sequence in a text editor, the pages-visited history in a web browser, etc.
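As an illustrative sketch of the balanced-brackets check mentioned above (java.util.Deque is used as the stack; the helper method name is made up for this example):

import java.util.ArrayDeque;
import java.util.Deque;

public class BalancedBrackets {

    // returns true if every '(' has a matching ')'
    public static boolean isBalanced(String expression) {
        Deque<Character> stack = new ArrayDeque<Character>();
        for (char c : expression.toCharArray()) {
            if (c == '(') {
                stack.push(c);                        // remember the opening bracket
            } else if (c == ')') {
                if (stack.isEmpty()) {
                    return false;                     // closing bracket with no opener
                }
                stack.pop();                          // matched, discard the opener
            }
        }
        return stack.isEmpty();                       // anything left over is unmatched
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("(4+2) * 3"));  // true
        System.out.println(isBalanced("(4+2 * 3"));   // false
    }
}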
Queues are somewhat like a stack, except that in a queue the first item inserted is the first to be removed (i.e. First In First Out – FIFO). The FIFO behavior is achieved by restricting access to methods like peek( ), offer( ), and poll( ). For example, waiting in a line for a bus, a queue at the bank or supermarket teller, etc.
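A minimal sketch using java.util.Queue (the element values are illustrative):

import java.util.ArrayDeque;
import java.util.Queue;

public class QueueExample {

    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<String>();
        queue.offer("first");                 // enqueue at the tail
        queue.offer("second");
        System.out.println(queue.peek());     // first  (look at the head without removing)
        System.out.println(queue.poll());     // first  (remove from the head)
        System.out.println(queue.poll());     // second
    }
}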
LinkedLists are data structures made of nodes, where each node contains data and a reference to the next node, and possibly to the previous node as well for a doubly linked list. For example, a stack or queue can be implemented with a linked list or a doubly linked list because you can insert and delete at both ends. There are also situations where data will be frequently inserted and deleted in the middle. The Apache Commons Collections library provides a TreeList implementation, which is a good replacement for a LinkedList as it performs much better than a LinkedList at the expense of using a little more memory. This means a LinkedList is rarely a good choice of implementation.
ArrayList is a good general-purpose list implementation. An ArrayList is faster than a TreeList for most operations except inserting and removing in the middle of the list. A TreeList implementation utilizes a tree structure internally to ensure that all insertions and removals are O(log n). This provides much faster performance than both an ArrayList and a LinkedList where elements are inserted and removed repeatedly from anywhere in the list.
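As a hedged sketch (assuming the Apache Commons Collections 4 jar is on the classpath; in the older 3.x releases the class lives in org.apache.commons.collections.list instead), a TreeList is used like any other List:

import java.util.List;
import org.apache.commons.collections4.list.TreeList;

public class TreeListExample {

    public static void main(String[] args) {
        List<String> list = new TreeList<String>();
        list.add("A");
        list.add("C");
        list.add(1, "B");             // insertion in the middle is O(log n) for a TreeList
        System.out.println(list);     // [A, B, C]
    }
}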
HashMaps are amortized constant-time access data structures that map keys to values. This data structure is backed by an array. It uses a hashing function to identify a location, and some type of collision resolution algorithm is used to handle values that hash to the same location. For example, storing employee records with the employee number as the key, storing name/value pairs read from a properties file, etc. Initialize them with an appropriate initial size to minimize the number of resizes.
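A minimal illustrative sketch (the key/value types and data are made up for the example):

import java.util.HashMap;
import java.util.Map;

public class HashMapExample {

    public static void main(String[] args) {
        // an initial capacity of 64 reduces rehashing as records are added
        Map<Integer, String> employees = new HashMap<Integer, String>(64);
        employees.put(1001, "Alice");
        employees.put(1002, "Bob");
        System.out.println(employees.get(1001)); // Alice, amortized O(1) lookup
    }
}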
Trees are data structures that contain nodes with optional data elements and one or more child elements, and possibly each child element references the parent element to represent a hierarchical or ordered set of data elements. For example, a hierarchy of employees in an organization, storing XML data as a hierarchy, etc. If every node in a tree can have at most 2 children, the tree is called a binary tree. Binary trees are very common because the shape of a binary tree makes it easy to search and insert data. The edges in a tree represent quick ways to navigate from node to node.
Java does not provide a general-purpose tree implementation, but one can be easily written as shown below. Just make a class Node with an ArrayList holding links to other Nodes.
package bigo;

import java.util.ArrayList;
import java.util.List;

public class Node {

    private String name;
    private List<Node> children = new ArrayList<Node>( );
    private Node parent;

    public Node(String name) {
        this.name = name;
    }

    public Node getParent( ) {
        return parent;
    }

    public void setParent(Node parent) {
        this.parent = parent;
    }

    public void addChild(Node child) {
        children.add(child);
    }

    public void removeChild(Node child) {
        children.remove(child);
    }

    public String toString( ) {
        return name;
    }
}
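A short usage sketch (node names are illustrative) builds a two-level hierarchy with the class above:

package bigo;

public class NodeTest {

    public static void main(String[] args) {
        Node root = new Node("CEO");
        Node engineer = new Node("Engineer");
        engineer.setParent(root);
        root.addChild(engineer);
        System.out.println(engineer.getParent()); // prints: CEO
    }
}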
Graphs are data structures that represent arbitrary relationships between members of any data sets; these relationships can be represented as networks of nodes and edges. A tree structure is essentially a more organized graph where each node can have only one parent node. Unlike a tree, a graph's shape is dictated by a physical or abstract problem. For example, nodes (or vertices) in a graph may represent cities, while edges may represent airline flight routes between the cities.
To make a Java graph class, you will have to work out the way in which the information can be stored and accessed. A graph will use some of the data structures mentioned above. The Java API does not provide an implementation for this, but there are a number of third-party libraries like JUNG, JGraphT, and JDSL (which does not seem to support generics). The book "Core Java Career Essentials" covers working examples using Java.
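As a minimal sketch (not from the book; the class and method names are illustrative, and it assumes Java 8 or later for putIfAbsent/getOrDefault), a directed graph can be represented with an adjacency list built from a HashMap and Lists:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Graph {

    // each vertex maps to the list of vertices it has an edge to
    private Map<String, List<String>> adjacency = new HashMap<String, List<String>>();

    public void addVertex(String vertex) {
        adjacency.putIfAbsent(vertex, new ArrayList<String>());
    }

    public void addEdge(String from, String to) {
        addVertex(from);
        addVertex(to);
        adjacency.get(from).add(to);          // directed edge from -> to
    }

    public List<String> neighbours(String vertex) {
        return adjacency.getOrDefault(vertex, new ArrayList<String>());
    }

    public static void main(String[] args) {
        Graph routes = new Graph();
        routes.addEdge("Sydney", "Singapore");
        routes.addEdge("Singapore", "London");
        System.out.println(routes.neighbours("Singapore")); // [London]
    }
}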
Q. What do you know about the big-O notation and can you give some examples with respect to different data structures?
A. The Big-O notation simply describes how well an algorithm scales or performs in the worst-case scenario as the number of elements in a data structure increases. The Big-O notation can also be used to describe other behavior such as memory consumption. At times you may need to choose a slower algorithm because it also consumes less memory. Big-O notation can give a good indication about performance for large amounts of data, but the only real way to know for sure is to run a performance benchmark with large data sets, which takes into account things that are not considered in Big-O notation, like paging as virtual memory usage grows, etc. Although benchmarks are better, they aren't feasible during the design process, so Big-O complexity analysis is the choice.
The algorithms used by various data structures for different operations like search, insert, and delete fall into the following performance groups: constant-time O(1), linear O(n), logarithmic O(log n), exponential O(c to the power n), polynomial O(n to the power c), quadratic O(n to the power 2), and factorial O(n!), where n is the number of elements in the data structure. It is generally a tradeoff between performance and memory usage. Here are some examples.
Example 1: Finding an element in a HashMap is usually constant-time, which is O(1). This is constant time because a hashing function is used to find an element, and computing the hash value does not depend on the number of elements in the HashMap.
Example 2: Linear search of an array, a list, or a LinkedList is linear, which is O(n). This is linear because, in the worst case, you have to search the entire list. This means that if a list is twice as big, searching it will take roughly twice as long.
Example 3: An algorithm that needs to compare every element in an array to every other element to sort the array has quadratic complexity, which is O(n to the power 2). A nested for loop over the same data is O(n to the power 2). An example is shown under sorting algorithms.
Example 4: Binary search of a sorted array or ArrayList is logarithmic, which is O(log n). Searching for an element in a LinkedList normally requires O(n). This is one of the disadvantages of a LinkedList compared with data structures like an ArrayList or array, which (when sorted, via binary search) offer O(log n) performance, better than O(n) as the number of elements increases. A logarithmic running time means that if 10 items take at most x amount of time, 100 items will take at most about 2x, and 10,000 items will take at most about 4x. If you plot this on a graph, the time still grows as n (i.e. the number of items) increases, but the rate of growth slows down.
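A small illustrative sketch contrasting the two (using the JDK's Collections.binarySearch on sorted data; the values are made up):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SearchComplexityExample {

    public static void main(String[] args) {
        List<Integer> sorted = Arrays.asList(10, 20, 30, 40, 50);

        // O(log n): binary search repeatedly halves the search range of the sorted list
        System.out.println(Collections.binarySearch(sorted, 40)); // 3

        // O(n): indexOf scans the elements one by one until it finds a match
        System.out.println(sorted.indexOf(40));                   // 3
    }
}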
Q. What can you tell about the performance of a HashMap compared to a TreeMap? Which one would you prefer?
A. A balanced tree does have O(log n) performance. The TreeMap class in Java maintains key/value objects in sorted order by using a red-black tree. A red-black tree is a balanced binary tree. Keeping the binary tree balanced ensures fast insertion, removal, and look-up times of O(log n). This is not as fast as a HashMap, which is O(1), but the TreeMap has the advantage that the keys are in sorted order, which opens up a lot of other capabilities.
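A minimal sketch (the keys are illustrative) showing that a TreeMap iterates in sorted key order while a HashMap makes no ordering guarantee:

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderingExample {

    public static void main(String[] args) {
        Map<String, Integer> tree = new TreeMap<String, Integer>();
        tree.put("banana", 2);
        tree.put("apple", 1);
        tree.put("cherry", 3);
        System.out.println(tree);   // {apple=1, banana=2, cherry=3} - sorted by key

        Map<String, Integer> hash = new HashMap<String, Integer>(tree);
        System.out.println(hash);   // iteration order is not defined for a HashMap
    }
}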
Q. Which one to choose?
A. The decision as to whether to use an unordered collection like a HashSet or HashMap versus a sorted data structure like a TreeSet or TreeMap depends mainly on the usage pattern, and to some extent on the data size and the environment you run it on. The practical reason for keeping the elements in sorted order is frequent and faster retrieval of sorted data when inserts and updates are frequent. If the need for a sorted result is infrequent, like prior to producing a report or running a batch process, then maintaining an unordered collection and sorting it only when it is really required with Collections.sort(...) could sometimes be more efficient than maintaining the ordered elements. This is only an opinion, and no one can offer you a single correct answer. Even the complexity theories like Big-O notation assume possibly large values of n. In practice, an O(n) algorithm can be much faster than an O(log n) algorithm, provided the data set that is handled is sufficiently small. One algorithm might perform better on an AMD processor than on an Intel. If your system is set up to swap, disk performance needs to be considered. The only way to confirm efficient usage is to test and measure both performance and memory usage with the right data size. Measure both approaches on your chosen hardware to determine which is more appropriate.
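As an illustrative sketch of the "sort only when needed" approach mentioned above (the data is made up):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortOnDemandExample {

    public static void main(String[] args) {
        // unordered collection: cheap inserts while the data is being gathered
        List<String> names = new ArrayList<String>();
        names.add("Charlie");
        names.add("Alice");
        names.add("Bob");

        // sort once, only when the sorted view is actually required (e.g. for a report)
        Collections.sort(names);
        System.out.println(names); // [Alice, Bob, Charlie]
    }
}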
Q. What is the tradeoff between using an unordered array versus an ordered array?
A. The major advantage of an ordered array is that search times are much faster at O(log n) than for an unordered array, which is O(n) for larger values of n. The disadvantage of an ordered array is that insertion takes longer (i.e. O(n)) because all the elements with higher values need to be moved to make room. Insertion into an unordered array takes constant time, O(1). This means it does not depend on the number of elements. The following code snippets demonstrate adding an element to both ordered and unordered arrays.
Inserting an element into an unsorted array
package bigo;

import java.util.Arrays;

public class InsertingElementsToArray {

    public static void insertUnsortedArray(String toInsert) {
        String[] unsortedArray = { "A", "D", "C" };
        String[] newUnsortedArray = new String[unsortedArray.length + 1];
        // copy the existing elements and append the new element at the end: O(1) insert
        System.arraycopy(unsortedArray, 0, newUnsortedArray, 0, unsortedArray.length);
        newUnsortedArray[newUnsortedArray.length - 1] = toInsert;
        System.out.println(Arrays.toString(newUnsortedArray));
    }

    public static void main(String[] args) {
        insertUnsortedArray("B"); // prints [A, D, C, B]
    }
}
Inserting an element into a sorted array
package bigo;

import java.util.Arrays;

public class InsertingElementsToArray {

    public static void insertSortedArray(String toInsert) {
        String[] sortedArray = { "A", "C", "D" };

        /*
         * Arrays.binarySearch returns the index of the search item if it is
         * found; otherwise it returns (-(insertion point) - 1). This example
         * returns index = -2, which means the element is not found and needs
         * to be inserted as the second element.
         */
        int index = Arrays.binarySearch(sortedArray, toInsert);

        if (index < 0) { // not found
            // array indices are zero based. index = -2 means an insertion point of
            // -(-2) - 1 = 1, so insertIndex = 1
            int insertIndex = -index - 1;
            String[] newSortedArray = new String[sortedArray.length + 1];
            // copy the elements before and after the insertion point, leaving a gap
            // for the new element: O(n) insert
            System.arraycopy(sortedArray, 0, newSortedArray, 0, insertIndex);
            System.arraycopy(sortedArray, insertIndex,
                    newSortedArray, insertIndex + 1, sortedArray.length - insertIndex);
            newSortedArray[insertIndex] = toInsert;
            System.out.println(Arrays.toString(newSortedArray));
        }
    }

    public static void main(String[] args) {
        insertSortedArray("B"); // prints [A, B, C, D]
    }
}
So the decision depends on the usage pattern. Ask yourself the following questions: Do I have more inserts/deletes or more searches? What is the maximum number of elements likely to be stored in this array? How often do I need the sorted data? And finally, what does my performance benchmark show?
Q. How do you get an immutable collection?
A. This functionality is provided by the Collections class, which is a wrapper implementation using the decorator design pattern.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ReadOnlyExample {

    public static void main(String[] args) {
        Set<String> set = new HashSet<String>();
        set.add("Java");
        set.add("JEE");
        set.add("Spring");
        set.add("Hibernate");
        set = Collections.unmodifiableSet(set);
        set.add("Ajax"); // not allowed: throws UnsupportedOperationException
    }
}
Q. What does the following code do? Can the LinkedHashSet be replaced with a HashSet?
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class CollectionFunction {

    public <E> List<E> function(List<E> list) {
        // copying through a LinkedHashSet drops duplicates but keeps insertion order
        return new ArrayList<E>(new LinkedHashSet<E>(list));
    }
}
A. The above code removes duplicates from a supplied list by passing it through an implementation of the Set interface. In this case, a LinkedHashSet is used because it preserves the insertion order of the elements (it keeps a linked list of its entries; it does not implement SortedSet). If ordering is not required, the LinkedHashSet can be replaced with a HashSet.
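A short usage sketch (the input values are illustrative):

import java.util.Arrays;
import java.util.List;

public class CollectionFunctionTest {

    public static void main(String[] args) {
        List<String> input = Arrays.asList("b", "a", "b", "c", "a");
        List<String> deduped = new CollectionFunction().function(input);
        System.out.println(deduped); // [b, a, c] - duplicates removed, order preserved
    }
}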
Q. What are some of the best practices relating to the Java Collection framework?
A. Choose the right type of data structure based on usage patterns: fixed size or required to grow, duplicates allowed or not, ordering required to be maintained or not, traversal forward-only or bi-directional, inserts at the end only or at any arbitrary position, more inserts or more reads, concurrently accessed or not, modification allowed or not, homogeneous or heterogeneous collection, etc. Also, keep the multi-threading, atomicity, memory usage, and performance considerations discussed earlier in mind.
Don't assume that your collection is always going to be small as it can potentially grow bigger with time. So your collection should scale well.
Program in terms of interface, not implementation: For example, you might decide a LinkedList is the best choice for some application, but then later decide an ArrayList might be a better choice for performance reasons.
Bad: ArrayList list = new ArrayList(100);
Good:
// program to the interface so that the implementation can change
List list = new ArrayList(100);
List list2 = new LinkedList( ); // note: LinkedList does not take an initial capacity
Return zero-length collections or arrays as opposed to returning a null when the fetched list is actually empty. Returning a null instead of a zero-length collection is more error prone, since the programmer writing the calling method might forget to handle a return value of null.
List emptyList = Collections.emptyList( );
Set emptySet = Collections.emptySet( );
Use generics for type safety, readability, and robustness.
Encapsulate collections: In general, collections are not immutable objects. So care should be taken not to unintentionally expose collection fields to the caller. The caller may not perform any necessary validation.
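As a closing illustrative sketch (the class and field names are made up), a collection field can be exposed safely through an unmodifiable view, so all mutation goes through the owning class:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Order {

    private final List<String> lineItems = new ArrayList<String>();

    public void addLineItem(String item) {
        lineItems.add(item);               // mutation goes through the owning class
    }

    public List<String> getLineItems() {
        // callers get a read-only view; the internal list cannot be altered from outside
        return Collections.unmodifiableList(lineItems);
    }
}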