Search Results

Search found 32435 results on 1298 pages for 'ordered list'.


  • jQuery code working in Safari and Chrome but not Firefox

    - by Chris Armstrong
    I've got a site that has a long list of tweets, and as you scroll down, the right column follows you, showing stats on the tweets. (See it in action at http://www.grapevinegame.com . Click 'memorise', then 'skip' to get to the list page. Works in Safari and Chrome). I'm using jQuery to update the top margin of the right column, increasing it as I scroll down. It seems to be working fine in WebKit-based browsers, but doesn't budge in Firefox. Here's the code; the right column element is a div with id = "distance". // Listen for scroll function $(window).scroll(function () { // Calculating the position of the scrollbar var doc = $("body"), scrollPosition = $("body").scrollTop(), pageSize = $("body").height(), windowSize = $(window).height(), fullScroll = (pageSize) - windowSize; percentageScrolled = (scrollPosition / fullScroll); var entries = $("#whispers-list > li").length; // Set position of distance counter $('div#distance').css('margin-top', ($("#whispers-list").height()+$("#latest-whisper").height()+33)*percentageScrolled); // Update distance counter $('#distance-travelled').text(Math.round(distanceTravelled*(1-percentageScrolled))); $('#whispers-list li').each(function(index) { //highlight adjacent whispers if ($('#whispers-list li:nth-child('+(index+1)+')').offset().top >= $('#distance').offset().top && $('#whispers-list li:nth-child('+(index+1)+')').offset().top <= $('#distance').offset().top + $('#distance').height()) { // alert("yup"); $('#whispers-list li:nth-child('+(index+1)+') ul').fadeTo(1, 1); } else { $('#whispers-list li:nth-child('+(index+1)+') ul').fadeTo(1, 0.5); } }); }); Appreciate any help or advice!

    Read the article

  • Dynamic ExpandableListView

    - by JLB
    Can anyone tell me how I can Dynamically add groups & children to an ExpandableListView. The purpose is to create a type of task list that I can add new items/groups to. I can populate the view with pre-filled arrays but of course can't add to them to populate the list further. The other method i tried was using the SimpleExpandableListAdapter. Using this i am able to add groups from a List but when it comes to adding children i can only add 1 item per group. public List<Map<String, String>> groupData = new ArrayList<Map<String, String>>(); public List<List<Map<String, String>>> childData = new ArrayList<List<Map<String, String>>>(); public void addGroup(String group) { Map curGroupMap = new HashMap(); groupData.add(curGroupMap); curGroupMap.put(NAME, group); //Add an empty child or else the app will crash when group is expanded. List<Map<String, String>> children = new ArrayList<Map<String, String>>(); Map<String, String> curChildMap = new HashMap<String, String>(); children.add(curChildMap); curChildMap.put(NAME, "EMPTY"); childData.add(children); updateAdapter(); } public void addChild(String child) { List children = new ArrayList(); Map curChildMap = new HashMap(); children.add(curChildMap); curChildMap.put(NAME, child); curChildMap.put(IS_EVEN, "This child is even"); childData.add(activeGroup, children); updateAdapter(); }

    Read the article

  • how to determine if a character vector is a valid numeric or integer vector

    - by Andrew Barr
    I am trying to turn a nested list structure into a dataframe. The list looks similar to the following (it is serialized data from parsed JSON read in using the httr package). myList <- list(object1 = list(w=1, x=list(y=0.1, z="cat")), object2 = list(w=2, x=list(y=0.2, z="dog"))) unlist(myList) does a great job of recursively flattening the list, and I can then use lapply to flatten all the objects nicely. flatList <- lapply(myList, FUN= function(object) {return(as.data.frame(rbind(unlist(object))))}) And finally, I can button it up using plyr::rbind.fill myDF <- do.call(plyr::rbind.fill, flatList) str(myDF) #'data.frame': 2 obs. of 3 variables: #$ w : Factor w/ 2 levels "1","2": 1 2 #$ x.y: Factor w/ 2 levels "0.1","0.2": 1 2 #$ x.z: Factor w/ 2 levels "cat","dog": 1 2 The problem is that w and x.y are now being interpreted as character vectors, which by default get parsed as factors in the dataframe. I believe that unlist() is the culprit, but I can't figure out another way to recursively flatten the list structure. A workaround would be to post-process the dataframe, and assign data types then. What is the best way to determine if a vector is a valid numeric or integer vector?

    Read the article

  • lock shared data using c#

    - by menacheb
    Hi, I have a program (C#) with a list of tasks to do. Also, I have two threads: one to add tasks to the list, and one to read and remove the performed tasks from it. I'm using the 'lock' statement each time one of the threads wants to access the list. Another thing I want: if the list is empty, the thread that needs to read from the list should sleep, and wake up when the first thread adds a task to the list. Here is the code I wrote: ... List<String> myList = new List(); Thread writeThread, readThread; writeThread = new Thread(write); writeThread.Start(); readThraed = new Thread(read); readThread.Start(); ... private void write() { while(...) { ... lock(myList) { myList.Add(...); } ... if (!readThread.IsAlive) { readThraed = new Thread(read); readThread.Start(); } ... } ... } private void read() { bool noMoreTasks = false; while (!noMoreTasks) { lock (MyList)//syncronize with the ADD func. { if (dataFromClientList.Count > 0) { String task = myList.First(); myList.Remove(task); } else { noMoreTasks = true; } } ... } readThread.Abort(); } Apparently I did something wrong, and it doesn't behave as expected (the readThread doesn't read from the list). Does anyone know what my problem is, and how to make it right? Many thanks,
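    One way to get the sleep/wake behaviour described above is the classic lock-plus-Monitor.Wait/Pulse pattern. This is a minimal sketch, not the asker's code; the TaskQueue name and the string task type are illustrative assumptions:

        using System.Collections.Generic;
        using System.Threading;

        class TaskQueue
        {
            private readonly Queue<string> tasks = new Queue<string>();
            private readonly object gate = new object();
            private bool completed;

            // Writer thread: add a task and wake any sleeping reader.
            public void Add(string task)
            {
                lock (gate)
                {
                    tasks.Enqueue(task);
                    Monitor.Pulse(gate);
                }
            }

            // Call once when no more tasks will ever be added.
            public void Complete()
            {
                lock (gate)
                {
                    completed = true;
                    Monitor.PulseAll(gate);
                }
            }

            // Reader thread: sleeps while the queue is empty, wakes on Pulse.
            public bool TryTake(out string task)
            {
                lock (gate)
                {
                    while (tasks.Count == 0 && !completed)
                        Monitor.Wait(gate);           // releases the lock while waiting

                    if (tasks.Count > 0)
                    {
                        task = tasks.Dequeue();
                        return true;
                    }

                    task = null;
                    return false;                     // completed and drained
                }
            }
        }

    The read thread then becomes a simple while (queue.TryTake(out task)) loop and never needs to be restarted or aborted. On .NET 4 and later, System.Collections.Concurrent.BlockingCollection<T> implements this same pattern out of the box.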

    Read the article

  • how to execute two threads simultaneously in Java Swing?

    - by jcrshankar
    My aim is to select all the files named with MANI.txt which is present in their respective folders and then load path of the MANI.txt files different location in table. After I load the path in the table,I used to select needed path and modifiying those. To load the MANI.txt files taking more time,because it may present more than 30 times in my workspace or etc. until load the files I want to give alarm to the user with help of ProgessBar.Once the list size has been populated I need to disable ProgressBar. Could anyone please help me out on this? import java.awt.*; import javax.swing.*; import javax.swing.table.*; import java.awt.event.*; import java.io.BufferedReader; import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; public class JTableHeaderCheckBox extends JFrame implements ActionListener { Object colNames[] = {"", "Path"}; Object[][] data = {}; DefaultTableModel dtm; JTable table; JButton but; java.util.List list; public void buildGUI() { dtm = new DefaultTableModel(data,colNames); table = new JTable(dtm); table.setAutoResizeMode(JTable.AUTO_RESIZE_OFF); int vColIndex = 0; TableColumn col = table.getColumnModel().getColumn(vColIndex); int width = 10; col.setPreferredWidth(width); int vColIndex1 = 1; TableColumn col1 = table.getColumnModel().getColumn(vColIndex1); int width1 = 500; col1.setPreferredWidth(width1); JFileChooser chooser = new JFileChooser(); //chooser.setCurrentDirectory(new java.io.File(".")); chooser.setDialogTitle("Choose workSpace Path"); chooser.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY); chooser.setAcceptAllFileFilterUsed(false); if (chooser.showOpenDialog(this) == JFileChooser.APPROVE_OPTION){ System.out.println("getCurrentDirectory(): " + chooser.getCurrentDirectory()); System.out.println("getSelectedFile() : " + chooser.getSelectedFile().getAbsolutePath()); } String path= chooser.getSelectedFile().getAbsolutePath(); File folder = new File(path); Here I need progress bar GatheringFiles ob = new GatheringFiles(); list=ob.returnlist(folder); for(int x = 0; x < list.size(); x++) { dtm.addRow(new Object[]{new Boolean(false),list.get(x).toString()}); } JPanel pan = new JPanel(); JScrollPane sp = new JScrollPane(table); TableColumn tc = table.getColumnModel().getColumn(0); tc.setCellEditor(table.getDefaultEditor(Boolean.class)); tc.setCellRenderer(table.getDefaultRenderer(Boolean.class)); tc.setHeaderRenderer(new CheckBoxHeader(new MyItemListener())); but = new JButton("REMOVE"); JFrame f = new JFrame(); pan.add(sp); but.move(650, 50); but.addActionListener(this); pan.add(but); f.add(pan); f.setSize(700, 100); f.pack(); f.setLocationRelativeTo(null); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.setVisible(true); } class MyItemListener implements ItemListener { public void itemStateChanged(ItemEvent e) { Object source = e.getSource(); if (source instanceof AbstractButton == false) return; boolean checked = e.getStateChange() == ItemEvent.SELECTED; for(int x = 0, y = table.getRowCount(); x < y; x++) { table.setValueAt(new Boolean(checked),x,0); } } } public static void main (String[] args) { SwingUtilities.invokeLater(new Runnable(){ public void run(){ new JTableHeaderCheckBox().buildGUI(); } }); } public void actionPerformed(ActionEvent e) { // TODO Auto-generated method stub if(e.getSource()==but) { System.err.println("table.getRowCount()"+table.getRowCount()); for(int x = 0, y = table.getRowCount(); x < y; x++) { 
if("true".equals(table.getValueAt(x, 0).toString())) { System.err.println(table.getValueAt(x, 0)); System.err.println(list.get(x).toString()); delete(list.get(x).toString()); } } } } public void delete(String a) { String delete = "C:"; System.err.println(a); try { File inFile = new File(a); if (!inFile.isFile()) { System.out.println("Parameter is not an existing file"); return; } //Construct the new file that will later be renamed to the original filename. File tempFile = new File(inFile.getAbsolutePath() + ".tmp"); BufferedReader br = new BufferedReader(new FileReader(inFile)); PrintWriter pw = new PrintWriter(new FileWriter(tempFile)); String line = null; //Read from the original file and write to the new //unless content matches data to be removed. while ((line = br.readLine()) != null) { System.err.println(line); line = line.replace(delete, " "); pw.println(line); pw.flush(); } pw.close(); br.close(); //Delete the original file if (!inFile.delete()) { System.out.println("Could not delete file"); return; } //Rename the new file to the filename the original file had. if (!tempFile.renameTo(inFile)) System.out.println("Could not rename file"); } catch (FileNotFoundException ex) { ex.printStackTrace(); } catch (IOException ex) { ex.printStackTrace(); } } } class CheckBoxHeader extends JCheckBox implements TableCellRenderer, MouseListener { protected CheckBoxHeader rendererComponent; protected int column; protected boolean mousePressed = false; public CheckBoxHeader(ItemListener itemListener) { rendererComponent = this; rendererComponent.addItemListener(itemListener); } public Component getTableCellRendererComponent( JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) { if (table != null) { JTableHeader header = table.getTableHeader(); if (header != null) { rendererComponent.setForeground(header.getForeground()); rendererComponent.setBackground(header.getBackground()); rendererComponent.setFont(header.getFont()); header.addMouseListener(rendererComponent); } } setColumn(column); rendererComponent.setText("Check All"); setBorder(UIManager.getBorder("TableHeader.cellBorder")); return rendererComponent; } protected void setColumn(int column) { this.column = column; } public int getColumn() { return column; } protected void handleClickEvent(MouseEvent e) { if (mousePressed) { mousePressed=false; JTableHeader header = (JTableHeader)(e.getSource()); JTable tableView = header.getTable(); TableColumnModel columnModel = tableView.getColumnModel(); int viewColumn = columnModel.getColumnIndexAtX(e.getX()); int column = tableView.convertColumnIndexToModel(viewColumn); if (viewColumn == this.column && e.getClickCount() == 1 && column != -1) { doClick(); } } } public void mouseClicked(MouseEvent e) { handleClickEvent(e); ((JTableHeader)e.getSource()).repaint(); } public void mousePressed(MouseEvent e) { mousePressed = true; } public void mouseReleased(MouseEvent e) { } public void mouseEntered(MouseEvent e) { } public void mouseExited(MouseEvent e) { } } ****************** import java.io.File; import java.util.*; public class GatheringFiles { public static List returnlist(File folder) { List<File> list = new ArrayList<File>(); List<File> list1 = new ArrayList<File>(); getFiles(folder, list); return list; } private static void getFiles(File folder, List<File> list) { folder.setReadOnly(); File[] files = folder.listFiles(); for(int j = 0; j < files.length; j++) { if( "MANI.txt".equals(files[j].getName())) { list.add(files[j]); } if(files[j].isDirectory()) 
getFiles(files[j], list); } } }

    Read the article

  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following: select <field list> from <table list> where <join conditions> and <condition list> and PrimaryKey in (select PrimaryKey from <table list> where <join list> and <condition list>) and PrimaryKey not in (select PrimaryKey from <table list> where <join list> and <condition list>) The sub-select queries both have multiple sub-select queries of their own that I'm not showing so as not to clutter the statement. One of the developers on my team thinks a view would be better. I disagree in that the SQL statement uses variables passed in by the program (based on the user's login Id). Are there any hard and fast rules on when a view should be used vs. using a SQL statement? What kind of performance gain issues are there in running SQL statements on their own against regular tables vs. against views. (Note that all the joins / where conditions are against indexed columns, so that shouldn't be an issue.) EDIT for clarification... Here's the query I'm working with: select obj_id from object where obj_id in( (select distinct(sec_id) from security where sec_type_id = 494 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) and sec_obj_id in ( select obj_id from object where obj_ot_id in (select of_ot_id from obj_form left outer join obj_type on ot_id = of_ot_id where ot_app_id = 87 and of_id in (select sec_obj_id from security where sec_type_id = 493 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) ) and of_usage_type_id = 131 ) ) ) ) or (obj_ot_id in (select of_ot_id from obj_form left outer join obj_type on ot_id = of_ot_id where ot_app_id = 87 and of_id in (select sec_obj_id from security where sec_type_id = 493 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) ) and of_usage_type_id = 131 ) and obj_id not in (select sec_obj_id from security where sec_type_id = 494) )

    Read the article

  • Behavior of <- NULL on lists versus data.frames for removing data

    - by Ananda Mahto
    Many R users eventually figure out lots of ways to remove elements from their data. One way is to use NULL, particularly when you want to do something like drop a column from a data.frame or drop an element from a list. Eventually, a user comes across a situation where they want to drop several columns from a data.frame at once, and they hit upon <- list(NULL) as the solution (since using <- NULL will result in an error). A data.frame is a special type of list, so it wouldn't be too tough to imagine that the approaches for removing items from a list should be the same as removing columns from a data.frame. However, they produce different results, as can be seen in the example below. ## Make some small data--two data.frames and two lists cars1 <- cars2 <- head(mtcars)[1:4] cars3 <- cars4 <- as.list(cars2) ## Demonstration that the `list(NULL)` approach works cars1[c("mpg", "cyl")] <- list(NULL) cars1 # disp hp # Mazda RX4 160 110 # Mazda RX4 Wag 160 110 # Datsun 710 108 93 # Hornet 4 Drive 258 110 # Hornet Sportabout 360 175 # Valiant 225 105 ## Demonstration that simply using `NULL` does not work cars2[c("mpg", "cyl")] <- NULL # Error in `[<-.data.frame`(`*tmp*`, c("mpg", "cyl"), value = NULL) : # replacement has 0 items, need 12 Switch to applying the same concept to a list, and compare the difference in behavior. ## Does not fully drop the items, but sets them to `NULL` cars3[c("mpg", "cyl")] <- list(NULL) # $mpg # NULL # # $cyl # NULL # # $disp # [1] 160 160 108 258 360 225 # # $hp # [1] 110 110 93 110 175 105 ## *Does* drop the `list` items while this would ## have produced an error with a `data.frame` cars4[c("mpg", "cyl")] <- NULL # $disp # [1] 160 160 108 258 360 225 # # $hp # [1] 110 110 93 110 175 105 The main questions I have are, if a data.frame is a list, why does it behave so differently in this scenario? Is there a foolproof way of knowing when an element will be dropped, when it will produce an error, and when it will simply be given a NULL value? Or do we depend on trial-and-error for this?

    Read the article

  • Strongly typed dynamic Linq sorting

    - by David
    I'm trying to build some code for dynamically sorting a LINQ IQueryable<T>. The obvious way is here, which sorts a list using a string for the field name http://dvanderboom.wordpress.com/2008/12/19/dynamically-composing-linq-orderby-clauses/ However I want one change - compile-time checking of field names, and the ability to use refactoring/Find All References to support later maintenance. That means I want to define the fields as f => f.Name, instead of as strings. For my specific use I want to encapsulate some code that would decide which of a list of named "OrderBy" expressions should be used based on user input, without writing different code every time. Here is the gist of what I've written: var list = from m in Movies select m; // Get our list var sorter = list.GetSorter(...); // Pass in some global user settings object sorter.AddSort("NAME", m => m.Name); sorter.AddSort("YEAR", m => m.Year).ThenBy(m => m.Year); list = sorter.GetSortedList(); ... public class Sorter<T> ... public static Sorter<T> GetSorter<T>(this IQueryable<T> source, ...) The GetSortedList function determines which of the named sorts to use, which results in a List<FieldData> object, where each FieldData contains the MethodInfo and Type values of the fields passed in AddSort: public SorterItem AddSort<TKey>(Func<T, TKey> field) { MethodInfo ... = field.Method; Type ... = typeof(TKey); // Create item, add item to dictionary, add fields to item's List // The item has the ThenBy method, which just adds another field to the List } I'm not sure if there is a way to store the entire field object in a way that would allow it to be returned later (it would be impossible to cast, since it is a generic type). Is there a way I could adapt the sample code, or come up with entirely new code, in order to sort using strongly typed field names after they have been stored in some container and retrieved (losing any generic type casting)?
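    As a rough sketch of one way to keep the lambdas strongly typed, the named sorts can be stored as delegates that already know how to order an IQueryable<T>; the Sorter<T> shape below is an assumption, not the asker's actual code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public class Sorter<T>
        {
            // Each named sort is a delegate that knows how to order the query.
            private readonly Dictionary<string, Func<IQueryable<T>, IOrderedQueryable<T>>> sorts =
                new Dictionary<string, Func<IQueryable<T>, IOrderedQueryable<T>>>(StringComparer.OrdinalIgnoreCase);

            // Single-key sort: the lambda is checked at compile time and survives refactoring.
            public void AddSort<TKey>(string name, Expression<Func<T, TKey>> key)
            {
                sorts[name] = source => source.OrderBy(key);
            }

            // Two-key sort; taking both keys up front sidesteps the fluent ThenBy chaining.
            public void AddSort<TKey1, TKey2>(string name,
                Expression<Func<T, TKey1>> first, Expression<Func<T, TKey2>> then)
            {
                sorts[name] = source => source.OrderBy(first).ThenBy(then);
            }

            // Picks the sort by name at run time; unknown names leave the query unsorted.
            public IQueryable<T> Apply(string name, IQueryable<T> source)
            {
                Func<IQueryable<T>, IOrderedQueryable<T>> sort;
                return sorts.TryGetValue(name, out sort) ? sort(source) : source;
            }
        }

    Usage would look like sorter.AddSort("NAME", m => m.Name); sorter.AddSort("YEAR", m => m.Year, m => m.Name); list = sorter.Apply(userChoice, list); - the field names stay compile-time checked while the choice of sort remains a runtime string.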

    Read the article

  • javax.xml.bind.MarshalException

    - by sandeep
    Hi, I am getting javax.xml.bind.MarshalException error. I am sending List from my webservice to the backingbean and I have this error. Here is my code: Backing bean @WebServiceRef(wsdlLocation = "http://localhost:26565/Login_webserviceService/Login_webservice?WSDL") public String login() { System.out.println("Login Phase entered"); int result = 0; List list; List finalList = null; try { Weblogin.LoginWebserviceService service = new Weblogin.LoginWebserviceService(); Weblogin.LoginWebservice port = service.getLoginWebservicePort(); result = port.login(voterID, password); Weblogin.LoginWebservice port1 = service.getLoginWebservicePort(); list = port1.candDetails(1); finalList = list; this.setList(finalList); } catch (Exception e) { e.printStackTrace(); } if (result == 1) return "polling"; else return "login"; } Webservice public List candDetails(int pollEvent) { List resultList = null; List finalList = null; try { if (pollEvent == 1) { resultList = em.createNamedQuery("Cantable.findAll").getResultList(); finalList = resultList; } } catch (Exception e) { e.printStackTrace(); } return resultList; }

    Read the article

  • Compare Properties automatically

    - by juergen d
    I want to get the names of all properties that changed for matching objects. I have these (simplified) classes: public enum PersonType { Student, Professor, Employee } class Person { public string Name { get; set; } public PersonType Type { get; set; } } class Student : Person { public string MatriculationNumber { get; set; } } class Subject { public string Name { get; set; } public int WeeklyHours { get; set; } } class Professor : Person { public List<Subject> Subjects { get; set; } } Now I want to get the objects where the property values differ: List<Person> oldPersonList = ... List<Person> newPersonList = ... List<Difference> differences = GetDifferences(oldPersonList, newPersonList); public List<Difference> GetDifferences(List<Person> oldP, List<Person> newP) { //how to check the properties without casting and checking //for each type and individual property?? //can this be done with Reflection even in Lists?? } In the end I would like to have a list of Differences like this: class Difference { public List<string> ChangedProperties { get; set; } public Person NewPerson { get; set; } public Person OldPerson { get; set; } } The ChangedProperties list should contain the names of the changed properties.
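    A minimal reflection-based sketch of the comparison itself, built on the Person classes above (matching old and new people, e.g. by Name, is left out). It compares scalar properties with Equals, so a collection property such as Professor.Subjects would still need its own handling:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class ObjectDiff
        {
            // Returns the names of readable, non-indexed properties whose values differ.
            public static List<string> ChangedProperties(Person oldPerson, Person newPerson)
            {
                var changed = new List<string>();
                var type = oldPerson.GetType();          // concrete type: Student, Professor, ...
                if (type != newPerson.GetType())
                    return changed;                      // different subtypes: treat as incomparable

                var properties = type.GetProperties()
                                     .Where(p => p.CanRead && p.GetIndexParameters().Length == 0);

                foreach (var property in properties)
                {
                    var oldValue = property.GetValue(oldPerson, null);
                    var newValue = property.GetValue(newPerson, null);
                    if (!Equals(oldValue, newValue))
                        changed.Add(property.Name);
                }
                return changed;
            }
        }

    GetDifferences could then pair people from the two lists (for example by Name), call ChangedProperties on each pair, and emit a Difference whenever the returned list is non-empty.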

    Read the article

  • T4MVC Optional Parameter Inferred From Current Context

    - by Itakou
    I have read the other post about this at T4MVC OptionalParameter values implied from current context and I am using the latest T4MVC (2.11.1), which is supposed to have the fix in. I even checked to make sure that it's there -- and it is. I am still getting the optional parameters filled in based on the current context. For example: Let's say I have a list that is by default ordered by a person's last name. I have the option to order by first name instead with the URL http://localhost/list/stuff?orderby=firstname When I am on that page, I want to go back to ordering by last name with the code: @Html.ActionLink("order by last name", MVC.List.Stuff(null)) The link I wanted was simply http://localhost/list/stuff without any parameters, to keep the URL simple and short - invoking default behaviors within the action. But instead the orderby is kept and the URL is still http://localhost/list/stuff?orderby=firstname Any help would be great. I know that in the most general cases, this does remove the query parameter - maybe I have a specific case where it is not removed. I find that it only happens when I have the URL inside a page that I included with RenderPartial. My actual code is <li>@Html.ActionLink("Recently Updated", MVC.Network.Ticket.List(Model.UI.AccountId, "LastModifiedDate", null, null, null, null, null))</li> <li>@Html.ActionLink("Recently Created", MVC.Network.Ticket.List(Model.UI.AccountId, "CreatedDate", null, null, null, null, null))</li> <li>@Html.ActionLink("Most Severe", MVC.Network.Ticket.List(Model.UI.AccountId, "MostSevere", null, null, null, null, null))</li> <li>@Html.ActionLink("Previously Closed", MVC.Network.Ticket.List(Model.UI.AccountId, "LastModifiedDate", null, "Closed", null, null, null))</li> The problem happens when someone clicks Previously Closed and goes to ?status=closed. When they then click Recently Updated, where I want the ?status cleared so that it shows the active ones, the ?status=closed stays. Any insight would be greatly appreciated.

    Read the article

  • Extension methods and forward compatibility of source code.

    - by TcKs
    Hi, I would like to solve a problem (hypothetical now, but probably real in the future) with extension methods and the growth of a class's interface in future development. Example: /* the code written on 17 March 2010 */ public class MySpecialList : IList<MySpecialClass> { // ... implementation } // ... somewhere elsewhere ... MySpecialList list = GetMySpecialList(); // returns list of special classes var reversedList = list.Reverse().ToList(); // .Reverse() is an extension method /* now "list" is unchanged and "reversedList" has the same items in reversed order */ /* --- in the future the interface of MySpecialList will be changed for reason XYZ */ /* the code written at some future date */ public class MySpecialList : IList<MySpecialClass> { // ... implementation public MySpecialList Reverse() { // reverse order of items in this collection return this; } } // ... somewhere elsewhere ... MySpecialList list = GetMySpecialList(); // returns list of special classes var reversedList = list.Reverse().ToList(); // .Reverse() was an extension method but is now an instance method and does something else! /* now "list" has its items in reversed order and "reversedList" has the same items as "list" */ My question is: Is there some way to prevent this case (I didn't find one)? If there is no way to prevent it, is there some way to find possible issues like this? If there is no way to find possible issues, should I forbid the usage of extension methods? Thanks.
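    There is no compiler switch that prevents this: an instance method always wins over an extension method, but only for code that is recompiled against the new class, while previously compiled assemblies keep binding to the extension. A small sketch of that behaviour (backing the list with a plain List<T> for brevity, which is an assumption about the original implementation):

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;

        public class MySpecialClass { }

        public class MySpecialList : IEnumerable<MySpecialClass>
        {
            private readonly List<MySpecialClass> items = new List<MySpecialClass>();
            public void Add(MySpecialClass item) { items.Add(item); }
            public IEnumerator<MySpecialClass> GetEnumerator() { return items.GetEnumerator(); }
            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

            // The member added "in the future", with the same name as Enumerable.Reverse.
            public MySpecialList Reverse()
            {
                items.Reverse();      // reverses in place and returns the same instance
                return this;
            }
        }

        class Demo
        {
            static void Main()
            {
                var list = new MySpecialList { new MySpecialClass(), new MySpecialClass() };

                // After recompilation this binds to the new instance method, not the
                // Enumerable.Reverse extension - the silent change the question is about.
                var reversedList = list.Reverse().ToList();

                // The extension method is still reachable by naming it explicitly.
                var copy = Enumerable.Reverse(list).ToList();

                Console.WriteLine("{0} items, {1} copied", reversedList.Count, copy.Count);
            }
        }

    There is no warning when an instance method starts shadowing an extension method, so unit tests around the calling code, or calling the extension explicitly as in the last line, are the usual safeguards.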

    Read the article

  • Why won't JPA delete owned entities when the owner entity loses the reference to them?

    - by Nick
    Hi! I've got a JPA entity "Request", that owns a List of Answers (also JPA entities). Here's how it's defined in Request.java: @OneToMany(cascade= CascadeType.ALL, mappedBy="request") private List<Answer> answerList; And in Answer.java: @JoinColumn(name = "request", referencedColumnName="id") @ManyToOne(optional = false) private Request request; In the course of program execution, the Request's List of Answers may have Answers added or removed from it, or the actual List object may be replaced. My problem is thus: when I merge a Request to the database, the Answer objects that used to be in the List are kept in the database -- that is, Answer objects that the Request no longer holds a reference to (indirectly, via a List) are not deleted. This is not the behaviour I desire, as if I merge a Request to the database, and then fetch it again, its Answers List may not be the same. Am I making some programming mistake? Is there an annotaion or setting that will ensure that the Answers in the database are exactly the Answers in the Request's List? A solution is to keep references to the original Answers List and then use the EntityManager to remove each old Answer before merging the Request, but it seems like there should be a cleaner way. Thank you!

    Read the article

  • How to get results efficiently out of an Octree/Quadtree?

    - by Reveazure
    I am working on a piece of 3D software that has sometimes has to perform intersections between massive numbers of curves (sometimes ~100,000). The most natural way to do this is to do an N^2 bounding box check, and then those curves whose bounding boxes overlap get intersected. I heard good things about octrees, so I decided to try implementing one to see if I would get improved performance. Here's my design: Each octree node is implemented as a class with a list of subnodes and an ordered list of object indices. When an object is being added, it's added to the lowest node that entirely contains the object, or some of that node's children if the object doesn't fill all of the children. Now, what I want to do is retrieve all objects that share a tree node with a given object. To do this, I traverse all tree nodes, and if they contain the given index, I add all of their other indices to an ordered list. This is efficient because the indices within each node are already ordered, so finding out if each index is already in the list is fast. However, the list ends up having to be resized, and this takes up most of the time in the algorithm. So what I need is some kind of tree-like data structure that will allow me to efficiently add ordered data, and also be efficient in memory. Any suggestions?
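    For what it's worth, one way to avoid the cost of keeping the result ordered while it grows is to append candidates into a single list (or a HashSet<int> if only membership matters) and sort once at the end. A rough C# sketch under the design described above, where an object's candidates are everything stored on the path from the root to its node plus everything in that node's subtree; the node shape is assumed, not taken from the question:

        using System.Collections.Generic;

        public class OctreeNode
        {
            public List<OctreeNode> Children = new List<OctreeNode>();
            public List<int> ObjectIndices = new List<int>();     // already ordered within a node

            // Candidates for an object stored in the last node of pathFromRoot.
            public static List<int> CollectCandidates(IList<OctreeNode> pathFromRoot)
            {
                var result = new List<int>(256);                  // grows geometrically, not per insert

                foreach (var node in pathFromRoot)                // ancestors, including the node itself
                    result.AddRange(node.ObjectIndices);

                var owner = pathFromRoot[pathFromRoot.Count - 1];
                foreach (var child in owner.Children)
                    AddSubtree(child, result);                    // descendants

                result.Sort();                                    // one sort at the end instead of ordered inserts
                return result;
            }

            private static void AddSubtree(OctreeNode node, List<int> result)
            {
                result.AddRange(node.ObjectIndices);
                foreach (var child in node.Children)
                    AddSubtree(child, result);
            }
        }

    If duplicates are possible, one de-duplication pass over the sorted result (or a HashSet) is cheaper than checking membership on every insert.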

    Read the article

  • Grouping by property value and writing group members

    - by Will S
    I need to group the following list by the department value but am having trouble with the LINQ syntax. Here's my list of objects: var people = new List<Person> { new Person { name = "John", department = new List<fields> {new fields { name = "department", value = "IT"}}}, new Person { name = "Sally", department = new List<fields> {new fields { name = "department", value = "IT"}}}, new Person { name = "Bob", department = new List<fields> {new fields { name = "department", value = "Finance"}}}, new Person { name = "Wanda", department = new List<fields> {new fields { name = "department", value = "Finance"}}}, }; I've toyed around with grouping. This is as far as I've got: var query = from p in people from field in p.department where field.name == "department" group p by field.value into departments select new { Department = departments.Key, Name = departments }; So I can iterate over the groups, but I'm not sure how to list the Person names - foreach (var department in query) { Console.WriteLine("Department: {0}", department.Department); foreach (var foo in department.Department) { // ?? } } Any ideas on what to do better or how to list the names of the relevant departments?
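    For reference, the group itself is already the sequence of Person objects, so the inner loop can enumerate it directly. A sketch reusing the classes and the people list from the question (System and System.Linq assumed in scope):

        var query = from p in people
                    from field in p.department
                    where field.name == "department"
                    group p by field.value into departments
                    select departments;                  // each group is IGrouping<string, Person>

        foreach (var department in query)
        {
            Console.WriteLine("Department: {0}", department.Key);
            foreach (var person in department)           // the group enumerates its members
                Console.WriteLine("  {0}", person.name);
        }

    If the anonymous type is kept, the member holding the group itself (Name in the question's code, not Department) is the one to iterate.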

    Read the article

  • Java using generics with lists and interfaces

    - by MirroredFate
    Ok, so here is my problem: I have a list containing interfaces - List<Interface> a - and a list of interfaces that extend that interface: List<SubInterface> b. I want to set a = b. I do not wish to use addAll() or anything that will cost more memory as what I am doing is already very cost-intensive. I literally need to be able to say a = b. I have tried List<? extends Interface> a, but then I cannot add Interfaces to the list a, only the SubInterfaces. Any suggestions? EDIT I want to be able to do something like this: List<SubRecord> records = new ArrayList<SubRecord>(); //add things to records recordKeeper.myList = records; The class RecordKeeper is the one that contains the list of Interfaces (NOT subInterfaces) public class RecordKeeper{ public List<Record> myList; }

    Read the article

  • Overload the behavior of count() when called on certain objects

    - by Tom
    In PHP 5, you can use magic methods, overload some classes, etc. In C++, you can implement functions that exist is STL as long as the argument types are different. Is there a way to do this in PHP? An example of what I'd like to do is this: class a { function a() { $this->list = array("1", "2"); } } $blah = new a(); count($blah); I would like blah to return 2. IE count the values of a specific array in the class. So in C++, the way I would do this might look like this: int count(a varName) { return count(varName->list); } Basically, I am trying to simplify data calls for a large application so I can call do this: count($object); rather than count($object->list); The list is going to be potentially a list of objects so depending on how it's used, it could be really nasty statement if someone has to do it the current way: count($object->list[0]->list[0]->list); So, can I make something similar to this: function count(a $object) { count($object->list); } I know PHP's count accepts a mixed var, so I don't know if I can override an individual type.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but before, it needs more information than the mere existence of a new set of values, it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us a Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and that it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
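    As a rough C# analogue of the two physical operators described above (not from the post itself), counting items per key the "stream" way versus the "hash" way makes the blocking difference easy to see:

        using System.Collections.Generic;

        static class AggregateDemo
        {
            // Stream aggregate: input must already be ordered by the key. A group is
            // emitted as soon as the key changes, so results flow before the input ends.
            public static IEnumerable<KeyValuePair<string, int>> StreamCount(IEnumerable<string> sortedKeys)
            {
                string current = null;
                int count = 0;
                foreach (var key in sortedKeys)
                {
                    if (current != null && key != current)
                    {
                        yield return new KeyValuePair<string, int>(current, count);
                        count = 0;
                    }
                    current = key;
                    count++;
                }
                if (current != null)
                    yield return new KeyValuePair<string, int>(current, count);
            }

            // Hash aggregate: order does not matter, but nothing can be emitted until
            // every row has been seen - the blocking behaviour.
            public static IEnumerable<KeyValuePair<string, int>> HashCount(IEnumerable<string> keys)
            {
                var counts = new Dictionary<string, int>();
                foreach (var key in keys)
                {
                    int n;
                    counts.TryGetValue(key, out n);
                    counts[key] = n + 1;
                }
                return counts;
            }
        }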

    Read the article

  • Why prefer a wildcard to a type discriminator in a Java API (Re: Effective Java)

    - by Michael Campbell
    In the generics section of Bloch's Effective Java (which handily is the "free" chapter available to all: http://java.sun.com/docs/books/effective/generics.pdf), he says: If a type parameter appears only once in a method declaration, replace it with a wildcard. (See page 31-33 of that pdf) The signature in question is: public static void swap(List<?> list, int i, int j) vs public static void swap(List<E> list, int i, int j) And then proceeds to use a static private "helper" function with an actual type parameter to perform the work. The helper function signature is EXACTLY that of the second option. Why is the wildcard preferable, since you need to NOT use a wildcard to get the work done anyway? I understand that in this case since he's modifying the List and you can't add to a collection with an unbounded wildcard, so why use it at all?

    Read the article

  • TFS: Work Items values from External Databases

    - by javarg
    A common question in TFS forums is how to populate list items from external sources in Work Items. Well, there is not a specific functionality to integrate Work Items with external databases or systems when designing them. Actually, you will need to associate your Work Items fields with Global Lists and then have some automated process update this global list regularly. Download this ImportGlobalList.zip file. I’ve put together a simple class (TfsGlobalList) that you can use to update global list items from a .NET application. You could for example, create a simple Console App and schedule it using Windows Scheduler. This App would query a database and then update a TFS Global List using the provided code. Note: the provided code must be run under an account with modify Global List permissions in TFS. Note: remember to refresh Team Explorer in order to see updates in Work Item field values. Enjoy!  
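    A sketch of what such a scheduled console app might look like. The TfsGlobalList.UpdateItems call below is a guess at the helper in the downloadable zip - its real signature may differ - and the connection string, query and collection URL are placeholders:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        class UpdateCustomerGlobalList
        {
            static void Main()
            {
                var items = new List<string>();

                // 1. Read the current values from the external database.
                using (var connection = new SqlConnection("Data Source=myServer;Initial Catalog=Crm;Integrated Security=true"))
                using (var command = new SqlCommand("SELECT Name FROM Customers ORDER BY Name", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            items.Add(reader.GetString(0));
                    }
                }

                // 2. Push them into the TFS global list (hypothetical helper; the running
                //    account needs permission to modify global lists).
                TfsGlobalList.UpdateItems(
                    "http://tfsserver:8080/tfs/DefaultCollection",   // collection URL
                    "Customers",                                      // global list name
                    items);

                Console.WriteLine("Global list updated with {0} items.", items.Count);
            }
        }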

    Read the article

  • UML class diagram - can aggregated object be part of two aggregated classes?

    - by user970696
    Some sources say that aggregation means the class owns the object and shares a reference. Let's assume an example where a company class holds a list of cars, but departments of that company also hold lists of the cars they use. class Department { list<Car> listOfCars; } class Company { list<Car> listOfCars; //initialization of the list } So in a UML class diagram, I would do it like this. But I assume this is not allowed, because it would imply that both the company and the department own the objects:
    [COMPANY]<>------[CAR]
    [DEPARTMENT]<>---|   //imagine this goes up to the Car class
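    In code the sharing itself is unproblematic: both classes can simply hold references to the same Car instances, as in the C# sketch below (class and property names are illustrative). In UML terms, shared aggregation (hollow diamond) may point at a class from several aggregates, while composite aggregation (filled diamond, real ownership) should come from at most one owner at a time, so Company-owns/Department-uses is usually drawn as composition from Company and aggregation from Department:

        using System.Collections.Generic;

        public class Car { public string Plate { get; set; } }

        public class Department
        {
            // Aggregation: the department only references cars, it does not own them.
            public List<Car> Cars = new List<Car>();
        }

        public class Company
        {
            // The company owns the car objects (closer to composition).
            public List<Car> Fleet = new List<Car>();
            public List<Department> Departments = new List<Department>();
        }

        class Demo
        {
            static void Main()
            {
                var company = new Company();
                var car = new Car { Plate = "ABC-123" };
                company.Fleet.Add(car);

                var itDepartment = new Department();
                company.Departments.Add(itDepartment);
                itDepartment.Cars.Add(car);   // same object shared by reference, no copy
            }
        }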

    Read the article

  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce See http://en.wikipedia.org/wiki/Mapreduce The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The developer expresses the computation as two Func delegates: Map and Reduce. Map - takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function. Reduce - accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator." the canonical MapReduce example: counting word frequency in a text file.     MapReduce using DryadLINQ see http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad DryadLINQ provides a simple and straightforward way to implement MapReduce operations. This The implementation has two primary components: A Pair structure, which serves as a data container. A MapReduce method, which counts word frequency and returns the top five words. The Pair Structure - Pair has two properties: Word is a string that holds a word or key. Count is an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation. public struct Pair { private string word; private int count; public Pair(string w, int c) { word = w; count = c; } public int Count { get { return count; } } public string Word { get { return word; } } public override string ToString() { return word + ":" + count.ToString(); } } The MapReduce function  that gets the results. the input data could be partitioned and distributed across the cluster. 1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file. 2. Applies the SelectMany operator to inputTable to transform the collection of lines into collection of words. The String.Split method converts the line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file. 3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher order collection, whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words. 4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from Step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each word. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word. 5. Applies OrderByDescending to counts. 
This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered. 6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the 100 most common words in the input file, and their frequency. Test then uses the Pair object's ToString implementation to print the top one hundred words, and their frequency.   public static IQueryable<Pair> MapReduce( string directory, string fileName, int k) { DryadDataContext ddc = new DryadDataContext("file://" + directory); DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName); IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' ')); IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x); IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count())); IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count); IQueryable<Pair> top = ordered.Take(k);   return top; }   To Test: IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100); foreach (Pair words in results) Debug.Print(words.ToString());   Note: DryadLINQ applications can use a more compact way to represent the query: return inputTable         .SelectMany(x => x.line.Split(' '))         .GroupBy(x => x)         .Select(x => new Pair(x.Key, x.Count()))         .OrderByDescending(x => x.Count)         .Take(k);     MapReduce using PLINQ The pattern is relevant even for a single multi-core machine, however. We can write our own PLINQ MapReduce in a few lines. the Map function takes a single input value and returns a set of mapped values àLINQ's SelectMany operator. These are then grouped according to an intermediate key à LINQ GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key à LINQ SelectMany again. We can put all of this together to implement MapReduce in PLINQ that returns a ParallelQuery<T> public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>( this ParallelQuery<TSource> source, Func<TSource, IEnumerable<TMapped>> map, Func<TMapped, TKey> keySelector, Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce) { return source .SelectMany(map) .GroupBy(keySelector) .SelectMany(reduce); } the map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, such that the reduce phase can then count the words in each group and output a word/count pair for each grouping: var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel(); var counts = files.MapReduce( path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)), word => word, group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });
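    As a small follow-on example (not from the article), the same PLINQ MapReduce extension can drive a different aggregation, such as counting files per extension under a directory tree; the path below is a placeholder and the extension method is assumed to be in scope:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class FileExtensionCounts
        {
            static void Main()
            {
                var directories = Directory
                    .EnumerateDirectories(@"C:\Projects", "*", SearchOption.AllDirectories)
                    .AsParallel();

                var counts = directories.MapReduce(
                    dir => Directory.EnumerateFiles(dir),                   // map: directory -> its files
                    file => Path.GetExtension(file).ToLowerInvariant(),     // intermediate key: extension
                    group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });

                foreach (var pair in counts.OrderByDescending(p => p.Value).Take(10))
                    Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
            }
        }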

    Read the article

  • About grub rescue command line

    - by khai
    Please help me. I have two hard disks in my laptop, one with Windows 7 and one with Ubuntu. I have a problem getting back to the boot menu because I chose one of the Ubuntu entries. The boot menu has a list where I can choose between Ubuntu and Windows 7, but the list also has extra entries for Ubuntu. I made the mistake of choosing the wrong Ubuntu entry, and now I am stuck at the command line that shows grub rescue, like this: error : no such partition grub rescue > I don't have any idea how to fix this problem. I tried the steps I saw in other questions but still don't have an answer. The problem is just that I chose the wrong entry from the list; it is not a missing kernel or anything like that.

    Read the article

  • How do you select the fastest mirror from the command line?

    - by Evan
    I want to update my sources.list file with the fastest server from the command line in a fresh Ubuntu Server install. I know this is trivially easy with the GUI, but there doesn't seem to be a simple way to do it from the command line? There are two different working answers to this question below: Use apt-get's mirror: method This method asks the Ubuntu server for a list of mirrors near you based on your IP, and selects one of them. The easiest alternative, with the minor downside that sometimes the closest mirror may not be the fastest. Command-line foo using netselect Shows you how to use the netselect tool to find the fastest recently updated servers from you -- network-wise, not geographically. Use sed to replace mirrors in sources.list. The other answers, including the accepted answer, are no longer valid (for Ubuntu 11.04 and newer) because they recommended Debian packages such as netselect-apt and apt-spy which do not work with Ubuntu. Use sed to replace mirrors in sources.list: sudo sed -i -e 's#us.archive.ubuntu.com#mirror.math.ucdavis.edu#g' /etc/apt/sources.list

    Read the article
