Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.


  • How to display List<> using ListView?

    - by eriX
    When I use ListView to display an array of my objects, I can use the following code: MyObject[] myObject; ... ArrayAdapter<MyObject> itemList = new ArrayAdapter<MyObject>(this, R.layout.list, myObject); setListAdapter(itemList); If instead the input is a list: List<MyObject> myobject; how can I assign it to the ListAdapter? Please advise, thanks!

    Read the article

  • Sort a list numerically in Python

    - by Matthew
    So I have this list, we'll call it listA. I'm trying to sort the outer list by the item at index [3] of each inner list, e.g. ['5.01','5.88','2.10','9.45','17.58','2.76'], largest value first. So the end result would be the entire list again, but starting with Santa at the top. Does that make any sense? [['John Doe', u'25.78', u'20.77', '5.01'], ['Jane Doe', u'21.08', u'15.20', '5.88'], ['James Bond', u'20.57', u'18.47', '2.10'], ['Michael Jordan', u'28.50', u'19.05', '9.45'], ['Santa', u'31.13', u'13.55', '17.58'], ['Easter Bunny', u'17.20', u'14.44', '2.76']]
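
    One way to do this, sketched in Python (assuming the values at index [3] should be compared numerically, so they are converted to float before sorting):

        listA = [['John Doe', u'25.78', u'20.77', '5.01'],
                 ['Jane Doe', u'21.08', u'15.20', '5.88'],
                 ['James Bond', u'20.57', u'18.47', '2.10'],
                 ['Michael Jordan', u'28.50', u'19.05', '9.45'],
                 ['Santa', u'31.13', u'13.55', '17.58'],
                 ['Easter Bunny', u'17.20', u'14.44', '2.76']]

        # Sort the outer list by the numeric value of each row's fourth element,
        # largest first, so Santa (17.58) ends up at the top.
        listA.sort(key=lambda row: float(row[3]), reverse=True)
        print(listA[0])  # ['Santa', u'31.13', u'13.55', '17.58']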

    Read the article

  • How to know if a std::list has been modified

    - by Nick
    I have this class: class C { public: C* parent; std::list<C> children; }; I can use this class in this way, for example: C root; C child; root.children.push_back(child); // or another method of std::list (e.g. push_front, insert, ...) // Here child.parent should be root // How can I set the parent of the child? I want to do this internally in my class without losing the functionality of std::list, is it possible?

    Read the article

  • Remove duplicates in a String[] list in F#

    - by Jose Maria de la Torre
    I need some noob help... I have a String[] list (a list in which each element is an array of strings) and I wish to remove all duplicates from the list. But to decide whether two entries are duplicates, I only need to look at the first element of each array. What I have tried is let NotDuplicated = Duplicated.[0] |> Seq.distinct and let NotDuplicated = Duplicated |> Seq.distinctBy id but neither is working... can you guys please help me.. thanks!!
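
    In F# the usual route is Seq.distinctBy with a projection that picks the first element (rather than id); as a language-agnostic sketch, here is the same distinct-by-key idea in Python (names are illustrative):

        def distinct_by_first(arrays):
            # Keep only the first array seen for each distinct first element.
            seen = set()
            result = []
            for arr in arrays:
                if arr[0] not in seen:
                    seen.add(arr[0])
                    result.append(arr)
            return result

        duplicated = [["a", "x"], ["b", "y"], ["a", "z"]]
        print(distinct_by_first(duplicated))  # [['a', 'x'], ['b', 'y']]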

    Read the article

  • Hiding dropdown list when list is empty

    - by Amina
    I currently have a dropdown list that is populated on page load, and sometimes the dropdown list doesn't contain anything and is empty. I would like to hide the dropdown list whenever there aren't any items in it. I thought I could do this via JavaScript, but I am not sure what I could be doing wrong, because after adding the JavaScript it still appears on the page. dropdown select: <select data-bind="options: endReason() ? endReason().SubReasons : [], value: subReasonId, optionsText: 'Text', optionsValue: 'Value', visible: isChecked" name="subReasons"> </select> This is my JavaScript: <script type="text/javascript"> function OnClientDropDownOpening(sender, eventArgs) { var combo = $find("<%= subReasons %>"); items = combo.get_items(); if(items.get_count()==0) { eventArgs.set_cancel(true); } } </script>

    Read the article

  • are there any useful datasets available on the web for data mining?

    - by niko
    Hi, does anyone know a good resource where example (real) data can be downloaded for experimenting with statistics and machine learning techniques such as decision trees? Currently I am studying machine learning techniques and it would be very helpful to have real data for evaluating the accuracy of various tools. If anyone knows a good resource (perhaps csv or xls files, or any other format), I would be very thankful for a suggestion.

    Read the article

  • Scrolling list with jQuery

    - by Chris
    My javascript is not up to scratch at the moment and I am stumped with this! I need to create an animated list with javascript like this one - http://www.fiveminuteargument.com/blog/scrolling-list. What I want is to take a list like so <ul> <li>Item 1</li> <li>Item 2</li> <li>Item 3</li> <li>Item 4</li> <li>Item 5</li> <li>Item 6</li> </ul> And display two at once, then display them in a loop, 2 at a time. Even pseudo code would help to get me started.

    Read the article

  • Find an element in a List of Lists in C#

    - by Sandy
    Hi, I have a list of lists as below: List<List<T>> userList Class T { string uniqueidentifier, string param2, int param3 } I have a uniqueidentifier and I need to find the element T in the list that has the same 'uniqueidentifier' value. I can do it using two 'foreach' loops, but this does not seem to be a nice way of doing things. I guess there should be some built-in method like 'Find' that does the same thing and is highly optimized.
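
    In C# the analogous built-ins are SelectMany (to flatten) followed by FirstOrDefault (to search); here is the same flatten-and-search idea sketched in Python, with illustrative stand-ins for T and its fields:

        from collections import namedtuple

        T = namedtuple('T', ['uniqueidentifier', 'param2', 'param3'])

        user_list = [[T('id-1', 'a', 1), T('id-2', 'b', 2)],
                     [T('id-3', 'c', 3)]]

        def find_by_id(lists, target):
            # Flatten the list of lists lazily and return the first match, or None.
            return next((t for inner in lists for t in inner
                         if t.uniqueidentifier == target), None)

        print(find_by_id(user_list, 'id-3'))  # T(uniqueidentifier='id-3', param2='c', param3=3)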

    Read the article

  • Python finding repeating sequence in list of integers?

    - by tijko
    I have a list of lists and each list has a repeating sequence. I'm trying to count the length of the repeating sequence of integers in each list: list_a = [111,0,3,1,111,0,3,1,111,0,3,1] list_b = [67,4,67,4,67,4,67,4,2,9,0] list_c = [1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,23,18,10] Which would return: list_a count = 4 (for [111,0,3,1]) list_b count = 2 (for [67,4]) list_c count = 10 (for [1,2,3,4,5,6,7,8,9,0]) Any advice or tips would be welcome. I'm trying to work it out with re.compile right now, but it's not quite right.
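
    One simple sketch (assuming the repeating block starts at index 0 and occurs at least twice in a row, which holds for all three examples):

        def repeat_length(seq):
            # Smallest prefix length p whose block is immediately repeated.
            for p in range(1, len(seq) // 2 + 1):
                if seq[p:2 * p] == seq[:p]:
                    return p
            return len(seq)  # no repetition found

        list_a = [111, 0, 3, 1, 111, 0, 3, 1, 111, 0, 3, 1]
        list_b = [67, 4, 67, 4, 67, 4, 67, 4, 2, 9, 0]
        list_c = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0,
                  1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 23, 18, 10]
        print(repeat_length(list_a), repeat_length(list_b), repeat_length(list_c))  # 4 2 10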

    Read the article

  • Order a List (C#) by many fields?

    - by Esabe
    Hi everyone, I want to order a List of objects in C# by many fields, not just by one. For example, let's suppose I have a class called X with two attributes, A and B, and I have the following objects, in that order: object1 = A = "a", B = "h" object2 = A = "a", B = "c" object3 = A = "b", B = "x" object4 = A = "b", B = "b" and I want to order the list by the A attribute first and, when they are equal, by the B attribute, so the order would be: "a" "c" "a" "h" "b" "b" "b" "x" As far as I know, the OrderBy method orders by only one parameter. Question: How can I order a C# List by more than one field? Thank you very much
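
    In C# this is typically OrderBy(x => x.A) followed by ThenBy(x => x.B); the same composite-key idea, sketched in Python for illustration:

        items = [("a", "h"), ("a", "c"), ("b", "x"), ("b", "b")]

        # Sort on a composite key: attribute A first, then attribute B.
        items.sort(key=lambda x: (x[0], x[1]))
        print(items)  # [('a', 'c'), ('a', 'h'), ('b', 'b'), ('b', 'x')]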

    Read the article

  • Second largest number in list python

    - by Manu Lakaster
    So I have to find THE SECOND LARGEST NUMBER IN A LIST, using simple loops only. My approach is to divide the list into two parts, find the largest number in each part, and then compare the two numbers; I will choose the smaller of the two. I cannot use ready-made functions or different approaches. Basically, this is my code... but it does not run correctly... Please help me fix it, because I spent a lot of time on it :( Thanks... P.S. Can we use indices to "divide" a list? #!/usr/local/bin/python2.7 alist=[-45,0,3,10,90,5,-2,4,18,45,100,1,-266,706] largest=alist[0] h=len(alist)/2 m=len(alist)-h print(alist) for i in alist: if alist[h]>largest: largest=alist[h] i=i+1 print(largest)
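
    Note that taking the smaller of the two halves' maxima is not guaranteed to give the second-largest value: if the largest and second-largest both land in the same half (here 706 and 100 are both in the second half), the result is wrong (90 instead of 100). A loop-only sketch that tracks the two largest values in a single pass avoids this:

        alist = [-45, 0, 3, 10, 90, 5, -2, 4, 18, 45, 100, 1, -266, 706]

        largest = second = float('-inf')
        for x in alist:
            if x > largest:
                second = largest
                largest = x
            elif x > second and x != largest:
                second = x

        print(second)  # 100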

    Read the article

  • Python 2.7 creating a multidimensional list

    - by poop
    I don't know why I am having so much trouble creating a 3-dimensional list. I need the program to create an empty n by n list. So for n = 4: x = [[[],[],[],[]],[[],[],[],[]],[[],[],[],[]],[[],[],[],[]]] I've tried using: y = [n*[n*[]]] y = [[[]]* n for i in range(n)] which both appear to be creating copies of a reference. I've also tried naive application of the list builder with little success: y = [[[]* n for i in range(n)]* n for i in range(n)] y = [[[]* n for i in range(1)]* n for i in range(n)] I've also tried building up the array iteratively using loops, with no success. In my rapid flurry of attempts to not post something stupidly easy to SO, I came upon a solution: y = [] for i in range(0,n): y.append([[]*n for i in range(n)]) Is there an easier / more intuitive way of doing this?
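
    The shortest reliable sketch uses a nested comprehension so that every cell is a freshly created list rather than a shared reference (the [[]] * n form copies one inner list object n times, which is why mutating one cell appears to change the others):

        n = 4
        y = [[[] for _ in range(n)] for _ in range(n)]

        y[0][0].append("only here")
        print(y[1][1])  # [] -- each cell is an independent empty list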

    Read the article

  • Move <option> to top of list with Javascript

    - by Adam
    I'm trying to create a button that will move the currently selected OPTION in a SELECT MULTIPLE list to the top of that list. I currently have OptionTransfer.js implemented, which is allowing me to move items up and down the list. I want to add a new function function MoveOptionTop(obj) { ... } Here is the source of OptionTransfer.js // =================================================================== // Author: Matt Kruse // WWW: http://www.mattkruse.com/ // // NOTICE: You may use this code for any purpose, commercial or // private, without any further permission from the author. You may // remove this notice from your final code if you wish, however it is // appreciated by the author if at least my web site address is kept. // // You may *NOT* re-distribute this code in any way except through its // use. That means, you can include it in your product, or your web // site, or any other form where the code is actually being used. You // may not put the plain javascript up on your site for download or // include it in your javascript libraries for download. // If you wish to share this code with others, please just point them // to the URL instead. // Please DO NOT link directly to my .js files from your site. Copy // the files to your server and use them there. Thank you. // =================================================================== /* SOURCE FILE: selectbox.js */ function hasOptions(obj){if(obj!=null && obj.options!=null){return true;}return false;} function selectUnselectMatchingOptions(obj,regex,which,only){if(window.RegExp){if(which == "select"){var selected1=true;var selected2=false;}else if(which == "unselect"){var selected1=false;var selected2=true;}else{return;}var re = new RegExp(regex);if(!hasOptions(obj)){return;}for(var i=0;i(b.text+"")){return 1;}return 0;});for(var i=0;i3){var regex = arguments[3];if(regex != ""){unSelectMatchingOptions(from,regex);}}if(!hasOptions(from)){return;}for(var i=0;i=0;i--){var o = from.options[i];if(o.selected){from.options[i] = null;}}if((arguments.length=0;i--){if(obj.options[i].selected){if(i !=(obj.options.length-1) && ! 
obj.options[i+1].selected){swapOptions(obj,i,i+1);obj.options[i+1].selected = true;}}}} function removeSelectedOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i=0;i--){var o=from.options[i];if(o.selected){from.options[i] = null;}}from.selectedIndex = -1;} function removeAllOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i=0;i--){from.options[i] = null;}from.selectedIndex = -1;} function addOption(obj,text,value,selected){if(obj!=null && obj.options!=null){obj.options[obj.options.length] = new Option(text, value, false, selected);}} /* SOURCE FILE: OptionTransfer.js */ function OT_transferLeft(){moveSelectedOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferRight(){moveSelectedOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferAllLeft(){moveAllOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferAllRight(){moveAllOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();} function OT_saveRemovedLeftOptions(f){this.removedLeftField = f;} function OT_saveRemovedRightOptions(f){this.removedRightField = f;} function OT_saveAddedLeftOptions(f){this.addedLeftField = f;} function OT_saveAddedRightOptions(f){this.addedRightField = f;} function OT_saveNewLeftOptions(f){this.newLeftField = f;} function OT_saveNewRightOptions(f){this.newRightField = f;} function OT_update(){var removedLeft = new Object();var removedRight = new Object();var addedLeft = new Object();var addedRight = new Object();var newLeft = new Object();var newRight = new Object();for(var i=0;i0){str=str+delimiter;}str=str+val;}return str;} function OT_setDelimiter(val){this.delimiter=val;} function OT_setAutoSort(val){this.autoSort=val;} function OT_setStaticOptionRegex(val){this.staticOptionRegex=val;} function OT_init(theform){this.form = theform;if(!theform[this.left]){alert("OptionTransfer init(): Left select list does not exist in form!");return false;}if(!theform[this.right]){alert("OptionTransfer init(): Right select list does not exist in form!");return false;}this.left=theform[this.left];this.right=theform[this.right];for(var i=0;i

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high-frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract the required information quickly from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch-driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast-moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did with the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as ‘provenance’ or ‘pedigree’)? The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the ecosystem. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate. However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require.
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • Negative ItemCount in SharePoint Document Library

    - by ccomet
    What can be done about negative numbers in library item counts? ItemCount is a read-only property, so what are you supposed to do when it is drastically incorrect? Earlier last week, I was doing some testing involving the copying and moving of files and folders from one document library to another. I was transferring the items from our actual document library to a sandbox "Test" library that I used to run all sorts of object model and workflow testing in before migrating to the public lists and libraries. I noticed that with files, things worked correctly, but when I copied a folder that had a file inside it (using SPFolder.CopyTo()), the item count for the test library did not actually update. Since this testing was mostly playing around, I paid it little heed. Today I was back in the test library to test a different workflow (regarding PDF conversion). While I was there, I decided to delete the folder I left last week since I didn't need it anymore. And that's when I saw the item count for the list drop to -1 in the All Site Content View. When I deleted the new PDF I had just uploaded, it then dropped to -2! I even checked with the object model... getting an instance of the library, I checked the ItemCount property... lo and behold, it was also -2. Is there any process which runs in the background, kinda like the one that cleans up workflow history, which will correct this kind of issue? Or is a programmer expected to keep watch for this kind of situation and come up with calculations to compensate for the "count penalty", as it were?

    Read the article

  • Data table to Excel conversion leaves my server side button non responsive in SharePoint WebPart

    - by Nikhil Vaghela
    We have a web part on which we display some data in a grid. We are exporting the grid's underlying DataTable to Excel and displaying the Open - Save - Cancel dialogue box on click of a server-side button. Following is the code we execute on click of the server-side button: this.Page.Response.Clear(); this.Page.Response.AppendHeader("Content-Disposition", "attachment; filename=MyTasks.xls"); this.Page.Response.ContentType = "application/ms-excel"; this.Page.Response.Write("...here goes my well formated html...."); this.Page.Response.End(); The problem is that when I click Cancel in the dialogue box, the Open/Save Excel dialogue disappears, but all the server-side buttons placed on my web part become non-responsive; on click of any of those buttons their server-side click event does not get fired!!! Any idea? Thanks.

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to a more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to: reorder rows insert new lines anywhere delete any existing line What's the best data model to do that? I can see two ways: 1) Model it as an array: I add an int position property to my entity 2) Model it as a linked list: I add two one-to-one relations, next and previous from my entity to itself 1) makes it easy to sort, but painful to insert or delete as you then have to update the position of all objects that come after 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1): 3) add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats. Am I on the right track with solution 3), or is there something smarter?
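
    A small sketch of option 3's arithmetic, in Python purely for illustration (the renumbering pass that runs when a gap is exhausted is left out):

        def key_between(prev_key, next_key):
            # Midpoint between the neighbours' ordering values; insertion stays
            # cheap until the gap between neighbours runs out of room.
            return (prev_key + next_key) / 2.0

        rows = [("first", 100.0), ("third", 200.0)]
        new_key = key_between(rows[0][1], rows[1][1])   # 150.0
        rows.insert(1, ("second", new_key))
        print(rows)  # ordering column now reads 100.0, 150.0, 200.0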

    Read the article

  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently, I am experimenting long-polling using NSURLConnection. I found this and I decided to try it. What I do is send a request to the server with a timeout interval of 300 secs. (or 5 mins.) Here is a code snippet: NSURL *url = [NSURL URLWithString:urlString]; NSURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLCacheStorageAllowedInMemoryOnly timeoutInterval:300]; NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&resp error:&err]; Now I want to test if the connection will "hold" the request if no data was thrown back from the server, so what I did was this: if (data != nil) [self performSelectorOnMainThread:@selector(dataReceived:) withObject:data waitUntilDone:YES]; And the function dataReceived: looks like this: - (void)dataReceived:(NSData *)data { NSLog(@"DATA RECEIVED!"); NSString *string = [NSString stringWithUTF8String:[data bytes]]; NSLog(@"THE DATA: %@", string); } Server-side, I created a function that will return a data once it fits the arguments and returns none if nothing fits. Here is a snippet of the PHP function: function retrieveMessages($vardata) { if (!empty($vardata)) { $result = check_data($vardata) //check_data is the function which returns 1 if $vardata //fits the arguments, and 0 if it fails to fit if ($result == 1) { $jsonArray = array('Data' => $vardata); echo json_encode($jsonArray); } } } As you can see, the function will only return data if the $result is equal to 1. However, even if the function returns nothing, NSURLConnection will still perform the function dataReceived: meaning the NSURLConnection still receives data, albeit an empty one. So can anyone help me here? How will I perform long-polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. So how will I do it? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.

    Read the article

  • Compact data structure for storing a large set of integral values

    - by Odrade
    I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple: Add a new value Iterate over all of the values. There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int>or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom Filter would be great, if only I could tolerate the false negatives. I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions. EDIT: To answer your questions: No, the items don't need to be sorted By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this. There could be a decent number of these sets in memory at once (~100).
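
    For comparison, a fixed-range bit set costs one bit per possible id, roughly 6.25 MB for the 0-50,000,000 range, however many ids are actually stored; whether that beats a plain list depends on how dense the sets are. A rough sketch of the idea in Python (as an illustration, not a .NET answer):

        MAX_ID = 50000000

        class IdBitSet:
            def __init__(self, max_id=MAX_ID):
                self.bits = bytearray(max_id // 8 + 1)   # ~6.25 MB of backing store

            def add(self, i):
                self.bits[i >> 3] |= 1 << (i & 7)

            def __iter__(self):
                # Yield every stored id in ascending order.
                for byte_index, b in enumerate(self.bits):
                    if b:
                        base = byte_index << 3
                        for bit in range(8):
                            if b & (1 << bit):
                                yield base + bit

        s = IdBitSet()
        for i in (3, 42, 49999999):
            s.add(i)
        print(list(s))  # [3, 42, 49999999]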

    Read the article

  • Show SVG files on Sharepoint 2007

    - by Nicolas Irisarri
    I'm building a WSS site which has to show SVG files stored on WSS. I'm trying to use the <object> tag to show them, but nothing is displayed; however, if I use <embed> it works OK. I'm using IE8 and IE7. I've been reading that everyone says IE prefers <object> over <embed>, but in WSS it doesn't work this way. To display the file I'm using a Content Editor web part with this code: <object type="image/svg+xml" data="/samples/sample.svg" name="owMain" width="400" height="150"> </object> Any clue??

    Read the article

  • Deleting list items via ProcessBatchData()

    - by q-tuyen
    You can build a batch string to delete all of the items from a SharePoint list like this: //create a new StringBuilder StringBuilder batchString = new StringBuilder(); //add the main text to the stringbuilder batchString.Append(""); //add each item to the batch string and give it the Delete command foreach (SPListItem item in itemCollection) { //create a new method section batchString.Append(""); //insert the list id to know where to delete from batchString.Append("" + Convert.ToString(item.ParentList.ID) + ""); //the item to delete batchString.Append("" + Convert.ToString(item.ID) + ""); //set the action you would like to perform batchString.Append("Delete"); //close the method section batchString.Append(""); } //close the batch section batchString.Append(""); //perform the batch SPContext.Current.Web.ProcessBatchData(batchString.ToString()); The only disadvantage that I can think of right now is that all the items you delete will be put in the recycle bin. How can I prevent that? I also found a solution like the one below: // web is the SPWeb object your lists belong to // Before starting your deletion web.Site.WebApplication.RecycleBinEnabled = false; // When you are finished, re-enable it web.Site.WebApplication.RecycleBinEnabled = true; Ref [Here](http://www.entwicklungsgedanken.de/2008/04/02/how-to-speed-up-the-deletion-of-large-amounts-of-list-items-within-sharepoint/) But the disadvantage of that solution is that not only will future deletions skip the Recycle Bin, it also deletes all the items already in the Recycle Bin, which the user does not want. Any idea how to prevent the existing Recycle Bin items from being deleted? Many thanks in advance, TQT

    Read the article

  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The urls are nicely organized in an example.com/x format, with x as an ascending number and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers which are always in the same locations. I'll then need to get this data into structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data-From Web) but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.
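
    A hedged sketch of one way to do this with the Python standard library (the URL pattern and the regular expression are placeholders; the real pattern depends on the page's HTML, and an HTML parser is more robust than a regex for anything complicated):

        import csv
        import re
        import urllib.request

        # Placeholder pattern: capture the text of every <h2> heading on a page.
        HEADING_RE = re.compile(r'<h2[^>]*>(.*?)</h2>', re.IGNORECASE | re.DOTALL)

        with open('scraped.csv', 'w', newline='') as out:
            writer = csv.writer(out)
            for x in range(1, 11):                        # pages example.com/1 .. /10
                url = 'http://example.com/%d' % x
                html = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
                for heading in HEADING_RE.findall(html):
                    writer.writerow([x, heading.strip()])

    The resulting CSV file opens directly in Excel, which sidesteps the tables-only limitation of Excel's Data - From Web import mentioned above.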

    Read the article
