Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Determining percentage of students between certain grades

    - by dunc
    I have an Excel spreadsheet with the following data:

        Student | KS2 Grade | Target | Expected 1 | Expected 2 | Expected 3 | FSM Status | Gifted & Talented
        User 1  |     4     |   6    |     7      |     5      |     6      |     Y      |        N
        User 2  |     3     |   5    |     5      |     4      |     4      |     N      |        N
        User 3  |     5     |   6    |     6      |     6      |     7      |     N      |        N
        User 4  |     4     |   6    |     5      |     6      |     6      |     N      |        Y
        User 5  |     5     |   7    |     7      |     6      |     7      |     N      |        N
        User 6  |     3     |   4    |     4      |     4      |     4      |     N      |        N
        User 7  |     3     |   4    |     5      |     3      |     4      |     Y      |        Y

    What I'd like to do is determine the percentage of students within a certain range of levels. For instance, in the data above, I'd like to determine the percentage of all students with a Target level of 5-7. I'd then like to expand the formula to get the percentage of Gifted & Talented students with a Target level of 5-7. Is this possible in Excel? If so, where do I start?
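
    A minimal sketch of one way to do this with COUNTIFS (available in Excel 2007 and later), assuming the Target levels sit in C2:C8 and the Gifted & Talented flags in H2:H8; the ranges are placeholders for wherever the data actually lives:

        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7")/COUNTA(C2:C8)
        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7",H2:H8,"Y")/COUNTIF(H2:H8,"Y")

    The first formula counts Target values from 5 to 7 and divides by the total number of students; the second adds a criterion on the Gifted & Talented column and divides by the count of G&T students. Format the result cells as percentages.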

    Read the article

  • Optimal partition setup for Windows 7 on SSD

    - by Mike C.
    Hello, I'm setting up my system with Windows 7 right now, knowing that I'll be getting an SSD in the future. What optimizations/setup should I do now to make a smoother transition later? Should I create two partitions - one for the OS and one for data? Assuming I do, I would be able to easily ghost my OS partition onto the SSD in the future. If so, what should go on the OS drive besides the OS? Program files? If I install games or Visual Studio, should they go on the OS drive or the data drive? I can see the SSD filling up fast if I install all my program files on there. I've also seen a few posts where people talk about leaving a portion of the SSD unformatted - is this something I should do? Thanks!
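
    If the worry is bulky installs filling the future SSD, one trick (a sketch; the paths below are hypothetical) is to keep them on the data partition and reach them from C: through an NTFS junction, so installed paths stay valid when the OS partition alone is ghosted to the SSD:

        mklink /J "C:\Games\SomeGame" "D:\Games\SomeGame"

    mklink ships with Windows 7 and needs an elevated command prompt. As for leaving part of the SSD unformatted: that advice is about over-provisioning, which can help some drives' garbage collection, but it matters much less on a TRIM-capable SSD under Windows 7.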

    Read the article

  • Setting 'bookmarks' for command prompt on Windows XP

    - by The Communist Duck
    I know I can change the command prompt to start in C:\ rather than F:\Documents and Settings or wherever. But I was wondering whether it's possible to set bookmarks, or some other command that lets me jump to saved locations. For example, my Python projects live at a long path (about 7 subdirectories deep), so it takes a while to cd to them. What I would like is something like gotodirectory PythonProjects, which would cd to wherever I have specified, be it C:\WINDOWS or F:\Games\Steam\steamapps\common\some game\some game data\some more game data.
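
    A minimal sketch using doskey macros, which are built into cmd.exe (the macro names and paths are placeholders):

        doskey pyproj=cd /d "F:\long\path\down\to\PythonProjects"
        doskey gamedata=cd /d "F:\Games\Steam\steamapps\common\some game\some game data"

    Typing pyproj then jumps straight to that directory (cd /d also switches drives). Macros last only for the current cmd session; to make them permanent, put the doskey lines in a batch file and point the AutoRun value under HKCU\Software\Microsoft\Command Processor at it.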

    Read the article

  • How to set up server to be accessed from mobile devices [closed]

    - by Kgrover
    I need to set up a server that will eventually be accessed from an Android application. I don't have experience with how to go about this, but I've seen people run MySQL servers on Amazon EC2. I would need to store data on the server remotely (again, from a mobile device) and fetch it for display. What would be the best kind of server to use? I'm guessing I would only need around 50 GB of space. Is it possible to use a network drive and set that up as a remote server with an IP address? I would need to upload and extract data through Java on Android. This is my first question on Server Fault, and I'm not sure if it's the appropriate forum. If not, please redirect me.

    Read the article

  • How can I rename files and subdirectories in a copied directory based on changes in the original?

    - by GaryF
    I have a directory structure with many hundreds of files and folders underneath it for organising files (in this case photos). I create backups of that directory structure by rsyncing it periodically to identical copies on external drives. These drives may be offsite some of the time. I want to restructure and rename the files and directories in the original and then, later, when I have an external drive onsite, run some tool that applies the same structural and naming changes to the backup. If I just use rsync, it will have to recopy much of the data to the backup drive, which I'd rather avoid given the sizes involved. How can I get the changes I make to the original directory into the backups, as they become available, without having to recopy/rsync the data?
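
    One low-tech approach (a sketch, with hypothetical paths): log each rename as you make it, replay the log on the backup drive when it is next onsite, and then let rsync patch up anything the log missed:

        # while reorganising the original, record every change
        mv "photos/2009-misc" "photos/2009/unsorted"
        echo 'mv "photos/2009-misc" "photos/2009/unsorted"' >> ~/renames.sh

        # later, with the backup drive mounted
        cd /mnt/backup/photos && bash ~/renames.sh
        rsync -a --delete /data/photos/ /mnt/backup/photos/

    rsync does have a --fuzzy (-y) option that can find a renamed basis file, but it only looks within the same destination directory, so it won't avoid recopying files that moved between directories.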

    Read the article

  • "net use /delete" question

    - by tinmaru
    I want to delete a network connection. When I type net use I get this:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\toto>net use

        Status       Local     Remote                     Network
        ------------------------------------------------------------------------
        OK           M:        \\192.168.5.138\share      Microsoft Windows Network
        OK           R:        \\192.168.2.18\tools       Microsoft Windows Network
        OK                     \\192.168.2.43\data        Microsoft Windows Network
        The command completed successfully.

    The syntax for deleting a network mapping is net use X: /delete for a specific connection, or net use * /delete to delete all connections. How can I delete the \\192.168.2.43\data connection which, as you can see, is not linked to any drive letter?
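
    For a connection with no drive letter, net use accepts the UNC path in place of the device name, so the following should remove it:

        net use \\192.168.2.43\data /delete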

    Read the article

  • Windows 8 Task Manager RAM Usage Accuracy

    - by user264892
    The new Task Manager in Windows 8 has a great UI; however, there are some discrepancies in the data that I cannot account for. The machine is a physical machine (not a virtual one) with 8 GB of total RAM. The Processes tab shows 45% of memory utilized, yet the listed processes add up to only 0.948 GB, not 3.5 GB, and there is no "processes from all users" option. The Performance tab shows:

        In use:         3.6 GB
        Available:      4.4 GB
        Committed:      4.1/9.2 GB
        Cached:         3.7 GB
        Paged pool:     376 MB
        Non-paged pool: 135 MB

    My reading of this says I have a lot of "cloaked" processes somewhere eating my RAM. How do I interpret this data, and how do I verify it?

    Read the article

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose the following: 2 sites (NJ, CA); identical servers at each site, set up as remote-site cluster nodes running Windows Server 2008 R2 Enterprise; services: File, Terminal, AD, and maybe DNS. Will the following be true: files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync/back up files? Also, supposing I use roaming profiles with folder redirection: how will client computers in the WAN access their data through the cluster (i.e. will they automatically choose the best route)?

    Read the article

  • Which particular file caused "Delayed write failed" error?

    - by user35020
    I sometimes get this error when resuming from hibernation: "Delayed Write Failed: Windows was unable to save all the data for the file G:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere." I know this is caused because the hard drive (G:, an external USB drive) was either (a) plugged in when I hibernated and not ready at wake-up, or (b) simply not plugged in when I resumed from hibernation. My question is: is there any way to see which particular file/folder failed to be written? The hard drive functions correctly before and after, so there seems to be no permanent damage. Is there a detailed log someplace, or a utility? I've searched but found nothing. Thanks for any help!
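
    One place to look, assuming Windows Vista or later where wevtutil is available (on XP, use Event Viewer instead): delayed-write failures are recorded in the System event log, typically as Event ID 50, and the event text usually names the affected file. A sketch of pulling the most recent entries from an elevated prompt:

        wevtutil qe System /q:"*[System[(EventID=50)]]" /f:text /rd:true /c:5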

    Read the article

  • Can I convert my database/script to UTF-8 ?

    - by Mohannad Otaibi
    How can I convert a database to support UTF-8, and convert its old data from whatever encoding it's in to UTF-8? Extra info: I'm running a server which hosts many websites, one of which runs WHMCS (a PHP script for managing hosting clients). WHMCS has an iPhone application through which I can browse it; the problem is that this application will only run if everything in my website is in UTF-8 encoding. I was using windows-1256 as the encoding in my script's settings; at some point I tried changing that to UTF-8 for a while, then changed it back to windows-1256. So some of the data in the database was inserted using UTF-8 and most of it is windows-1256. Could someone clear the picture for me? Do I need to convert every database on the server or just one DB? What should I change? If I have to do it manually, I'll do it, but I need some expert advice.
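
    Assuming the WHMCS database is MySQL (which is what WHMCS normally runs on), the declared encoding can be changed per database and per table; a sketch with placeholder names:

        ALTER DATABASE whmcs_db CHARACTER SET utf8 COLLATE utf8_general_ci;
        ALTER TABLE tblclients CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    CONVERT TO transcodes text from the table's currently declared character set, so it only comes out right for rows whose declared and actual encodings match; rows inserted as windows-1256 bytes under a different declared charset would need individual fixing (e.g. dump, re-encode, re-import). Only the databases the application reads need converting, but every layer - database, PHP settings, and page headers - has to agree on UTF-8.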

    Read the article

  • How do you create a report in Word (or other documentation software) that is linked directly with Excel?

    - by NoCatharsis
    I believe my question may be best answered by using Access, since that's more what it's made for. However, I don't have a license for Access here at work, and trying to get one is like pulling teeth. So I'm curious whether there is any way to compile reports from data in an Excel 2007 sheet. The output can be .doc, .docx, .pdf - or anything else if there's a decent piece of free third-party software. This might be easiest solved by just creating another sheet in the same workbook and directly linking to the data I want to display in a report-esque format, but I wanted to see if SU could offer some more creative solutions.

    Read the article

  • I have a bunch of CHK files on my USB Drive that used to be my stories that I saved on there. How do I get them back?

    - by Susana
    OK, so I am not sure why, but my USB flash drive isn't showing all of the stories that I typed and saved to it. It might be because I removed the USB flash drive without ejecting it safely. All of the data appeared to still be on the flash drive - the capacity was almost full - I just couldn't see it. So, when I ran a disk check to see if there were any problems, the computer found that there were. I think it found my files, but now they are CHK files and I don't know how to get them back. Can someone please help? This is my life's work here!
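
    If the recovered files really were Word documents, one low-tech thing to try (a sketch; FOUND.000 is the folder Check Disk normally creates, and the drive letter and extension are guesses at your setup):

        cd /d E:\FOUND.000
        ren *.CHK *.doc

    Then open the renamed files in Word. If they won't open, a CHK-recovery utility can inspect the file headers to work out the original formats. Work on copies, not on the only set.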

    Read the article

  • Mirror SQL Server 2008 to an AWS instance from our datacenter?

    - by Alex
    We are currently running a hosted POS system locally and would like to mirror it to AWS. We are new to AWS and would like to know the most cost-effective way to do this. We have 2 DB servers and 2 web servers right now in one cabinet in CA, plus one tape drive, one firewall, and one SAN. We are thinking of replicating our system in AWS (using SQL Server 2008), mirroring both systems, and using a witness server between them to keep the data in sync. The goal is: if the CA datacenter goes down, AWS keeps running, users see no downtime, and all data is synced. Is anyone doing something similar? Would it be practical to use AWS in this fashion? Thanks
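
    For the mirroring itself, SQL Server 2008 database mirroring is configured per database, with a mirroring endpoint on each server and a witness for automatic failover. A minimal sketch with placeholder names, run after the database has been restored WITH NORECOVERY on the AWS side:

        -- on each of the three servers, once:
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = ALL);

        -- on the AWS mirror, point at the CA principal:
        ALTER DATABASE PosDb SET PARTNER = 'TCP://ca-sql.example.com:5022';

        -- then on the CA principal:
        ALTER DATABASE PosDb SET PARTNER = 'TCP://aws-sql.example.com:5022';
        ALTER DATABASE PosDb SET WITNESS = 'TCP://witness.example.com:5022';

    Note that automatic failover requires high-safety (synchronous) mode, and synchronous commits across a WAN to AWS add latency to every write, so that trade-off is worth testing before committing to the design.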

    Read the article

  • Proxy server modifying request/response upon pass through

    - by jamiei
    I would like to set up a non-transparent proxy server that will selectively modify HTTP requests and responses as it passes them through. The motivation for wanting to modify the data on the fly and not at the original source are not part of the question. I'm hoping there is a solution which allows the modification scripts to be written in a mainstream programming language. I am aware of guides for Squid which allow you to flip images etc on the way through but I was hoping for a slightly more established and less hackish solution. Does anyone know of an established open source project or environment (Any major platform) which would allow me to script in arbitrary modifications to the data being passed through?
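
    One established open-source option is mitmproxy, whose inline scripts are plain Python. A minimal sketch of a script that rewrites traffic on the way through; the header name and replacement strings are just examples:

        # modify.py - run with: mitmproxy -s modify.py
        from mitmproxy import http

        def request(flow: http.HTTPFlow) -> None:
            # tag every outgoing request with a custom header
            flow.request.headers["x-modified-by"] = "proxy"

        def response(flow: http.HTTPFlow) -> None:
            # rewrite body text, but only for HTML responses
            if "text/html" in flow.response.headers.get("content-type", ""):
                flow.response.text = flow.response.text.replace("Example", "Sample")

    Clients are pointed at the proxy explicitly (non-transparent mode is the default), and mitmdump offers the same scripting without the console UI.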

    Read the article

  • Why is my backup to a USB drive so slow?

    - by Jonas
    I have tried several backup solutions for my data and none of them was good enough. I basically want to copy my files to an attached USB drive from time to time. I don't mind starting my backups manually, since the USB drive is not always connected. My problem is that my data contains a huge number of files, so backing up takes forever (more than 20 hours). Using rsync and other similar solutions doesn't work, because the I/O needed to check each file for changes takes longer than actually copying it. Any suggestions?

    Read the article

  • Scaling web application with SQL Server 2008 database

    - by John
    I have a database in which 90% of the tables are read-only; 10% of the tables hold writable data. We need to scale the ASP.NET application by adding more users who will not be writing to the database. We are thinking of adding another server and routing the users who need read-only access to it. Is there a way to replicate just some tables to another database server? Since 90% of the data doesn't change, we don't want to set up full database replication. Please advise.
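
    If you go the replication route, transactional replication lets you publish only selected tables (articles) to the second server. A sketch of the idea in T-SQL, with placeholder names, assuming a distributor is configured and the database is already enabled for publishing; in practice the New Publication wizard in Management Studio walks through the same choices:

        EXEC sp_addpublication @publication = N'ReadOnlyTables', @status = N'active';
        EXEC sp_addarticle @publication = N'ReadOnlyTables',
             @article = N'Products', @source_object = N'Products';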

    Read the article

  • Missed something? Can't upload files to server (permissions)

    - by Camran
    I can upload files as "root" to the Ubuntu server. Then I created a user (me), added the user to the www-data group, and assigned rwx permissions for www-data on the web directory. But when I try to upload, delete or modify files via FileZilla, I can't - although via the terminal I can change files using sudo. What should I do to be able to upload files without getting "permission denied" in FileZilla? If you need more input, let me know. Thanks
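
    A sketch of one common arrangement, assuming your user is called me and the web root is /var/www:

        # put the user in the www-data group (log out/in and reconnect FileZilla afterwards)
        sudo usermod -aG www-data me

        # give the tree to the group and let the group write
        sudo chown -R www-data:www-data /var/www
        sudo find /var/www -type d -exec chmod 2775 {} \;
        sudo find /var/www -type f -exec chmod 664 {} \;

    The 2 in 2775 is the setgid bit, so new files inherit the www-data group. The usual cause of this symptom is an FTP/SFTP session opened before the group change took effect, or files that still deny group write even though the directory allows it.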

    Read the article

  • Who should own /var/www? [closed]

    - by John
    Possible Duplicate: How should I structure my users/groups/permissions for a web server? I've seen a few answers to this on the internet, but I'm looking for a definitive answer. I have a new Ubuntu 12.04 LTS server with LAMP. Apache is set to run as "www-data" and /var/www is set as having "root" as the owner and "root" as the group. The permissions for /var/www are "drwxr-xr-x" which I believe translates to 755 numerically. I know that /var/www should not be owned by "www-data" because then buggy/malicious code could have a field day. However, should I keep it as root:root (inconvenient) or should I change it to ubuntu:ubuntu, the default user that Ubuntu preconfigures for you to log in with? Should the permissions remain at 755? I've been administrating systems for a while with no big security issues, but I'm trying to get really serious about security, double-check everything, and make sure that there are no gaps in my knowledge.

    Read the article

  • Excel: How to Compare Column Values in a Row

    - by spazzie
    I have a bunch of comparison data and a lot of entries being compared. As an example, say my sheet looks like this, give or take a few columns:

        Item | Price1 | Quantity1 | Price2 | Quantity2 | Price3 | Quantity3
        001  | $123   | 12        | $456   | 24        | $789   | 48
        002  | $100   | 95        | $200   | 5         | $300   | 51

    For each item (row), I want to look at all of the Quantity columns and find which one has the highest quantity. Ideally I'd be able to apply a condition of some sort to the entire sheet at once, and it would highlight the highest quantity in red. So the result would be a red "48" (Quantity3) for item 001 and a red "95" (Quantity1) for item 002. Only the colour would change, not any data, and no new rows would need to be created. Let me know if you need more info.
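
    A sketch using conditional formatting, assuming the quantities sit in columns C, E and G starting at row 2: select the quantity cells (select the C range, then Ctrl-select the E and G ranges) with C2 as the active cell, create a rule of type "Use a formula to determine which cells to format" with a red format, and enter:

        =C2=MAX($C2,$E2,$G2)

    The column-absolute references ($C2, $E2, $G2) pin the comparison to the three quantity columns in each row, while the relative C2 on the left adjusts to whichever cell is being formatted. Adjust the ranges to however many Quantity columns the real sheet has.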

    Read the article

  • New 2.5" hard drive for laptop - What to compare?

    - by TFM
    I'm having trouble finding a new (bigger) hard drive for my laptop. While checking a price comparison site, I came across some criteria I'd never thought about before, which of course made me more confused. First of all, I will probably go with something above 250 GB and at least 16 MB of cache. Now the confusing part: most new drives are 7200 RPM, as opposed to the good old 5400 RPM. 7200 RPM used to mean extra heat, but suddenly it's almost impossible to find a 5400 RPM drive in 2.5". What did I miss? Second question: internal data transfer rate. My old drive has a rate of around 60 MB/s, but new drives list values like 100 MB/s or more (e.g. 150 MB/s). How important is this "internal data transfer rate"?

    Read the article

  • vSphere 4 - how can I cancel a file copy in progress?

    - by DrStalker
    VMware vSphere 4, SAN storage with multiple datastores, no vCenter. I shut down a virtual machine and, using the datastore browser, did a copy/paste to copy the VM to a new datastore with additional space. The file-copy performance was very poor, and due to time constraints I decided to cancel the copy task. However, the copy task showing in the vSphere client cannot be cancelled; the cancel option is disabled. Currently I am not able to start the machine in its original location, as the disk files are locked by the copy. How can I abort the copy? I tried deleting the target directory, but this did not abort the copy task.
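
    One last-resort approach (a sketch, not an official procedure): a stuck task lives in the host's management agents, and restarting those agents usually clears the task and the file locks without touching running VMs. On classic ESX 4 this is done from the service console; on ESXi 4, from tech support mode:

        # ESX 4 service console
        service mgmt-vmware restart

        # ESXi 4
        /sbin/services.sh restart

    Expect the host to disappear from the vSphere client for a minute or two while the agents restart, and make sure no other tasks are mid-flight first.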

    Read the article

  • 1TB HDD making strange noise (not a common one)

    - by Darkkurama
    I built a new PC a few days ago, and everything seems perfect, except that the 1 TB HDD I cloned from my old 500 GB HDD is making a deep, weird sound. Every time I access the disk I hear a deep sound, and when the PC is turning on I hear some clicking (the rapid clicking in the recording is my mouse; I'm opening and closing folders to trigger the vibrating sound I'm describing). I'm using this 1 TB disk mainly for data (the OS is on an SSD). As background information, the disk is a Seagate Barracuda 7200 rpm which was RMA'd and replaced with a refurbished one. Do refurbished disks perhaps make these noises? Should I worry about my data, even though the disk is working normally and passed a SeaTools short generic test? PS: I recorded the sounds; just click on the links. Thanks

    Read the article

  • Migrating Split Access Database from one domain to another (not working, details in Q)

    - by Expo_Rob
    Some background: I'm a programmer, not a network administrator, who has been asked to migrate some accounting software (Integrated Office Accounting version 3.2) from an existing domain (OLD_NETWORK) to a new domain (NEW_NETWORK). Nobody at the office knows how it works under the hood. It is a split Access 2000 database with the back-end shared on a file server (which is also the DC) using mapped drives. The DC is NT Server 4 SP6; the new server is Server 2003. The two networks are running independently (i.e. two computers on each desk). I have been able to get new computers set up on NEW_NETWORK and working with the IOA software just perfectly, but for one problem: the company here uses other, entirely separate databases which access the tables IOA maintains (specifically the 'customers' table) via links. To switch between these systems, you press F11, then File > Open the appropriate database, and away you go (this is necessary to maintain the permissions that the IOA system uses to protect the customers table). The entire database is Access 2000, the links go to other Access databases, and SQL Server is not involved in any way, nor is a migration to SQL Server likely. If I can't migrate anything over, everything will stay as it is and the NEW_NETWORK computers will not be used. The problem: when I try to update these separate databases (I shall call one "BANK_ACCOUNT", but the name does not matter), it says "this recordset cannot be updated". It also will sometimes not pull information out of the 'customers' table (e.g. date_entered) when looking at a report of everyone who opened a bank account on a certain day (e.g. today). I have tried:
    - Giving 'everyone' full control via shared directory permissions
    - Giving 'everyone' full control on a file-system level
    - Checking the permissions within Access (everyone has full read/write on all tables)
    - Copying the entire server contents from one file server to another (i.e. xcopy everything)
    - Copying the entire set of local client files from one computer to another, putting them in the exact same position in the file system, with the same permissions (or full control to 'everyone')
    - Running as an Administrator
    - Taking one of the NEW_NETWORK computers, having it join OLD_NETWORK and run the software (a direct copy from a working system with identical drive mappings); this did not work
    - Weeping openly
    My question: is there anything else I can try? (Sorry for this being so long.)

    Read the article

  • Best way for an external (remote) graphics designer to style ASP.NET MVC 4 app?

    - by Tom K
    My customer has his own graphics designer whom he wants to style the web application we're building in ASP.NET MVC 4. Our solution is in Bitbucket, but if he can't run it, what choices do we have? I doubt he uses Visual Studio 2012. One idea is for us to publish the solution to the file system, send it to him, and have him create a local IIS website on his machine (assuming he isn't using a Mac). Mocking data or pointing at a test SQL database in Azure isn't a problem. Then he can make changes to the .css and .cshtml files. Will this even work? The point is that he needs to be able to test his changes. I know he can modify the views and just check in, but he needs to deliver a working design, so that seems inefficient. The graphics designer will have access to our test site, so he can see how it works and what data and fields we have. Another idea is for him to build a static mock site using just HTML/CSS; later I'd integrate his styles into the customer's solution, split his HTML into the partial views we use, and add Razor syntax. Again, we'd like to leverage the graphics designer for all of this. Is there a best practice documented around this subject? How do other teams deal with this situation?

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and is now moving into the limelight with the pending release of ASP.NET Web API, which is bringing a ton of attention to server-side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications, as they allow you to dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so, I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes, I know C# is strongly typed; why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response, and rather than having to map the entire type structure of what that service returns, JsonValue allows me to cherry-pick and work with only the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need a strong type to deserialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy-to-use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish, using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code.

    Creating JSON on the fly with JsonValue

    Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure: JsonValue uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:

        [TestMethod]
        public void JsonValueOutputTest()
        {
            // strong type instance
            var jsonObject = new JsonObject();

            // dynamic expando instance you can add properties to
            dynamic album = jsonObject;

            album.AlbumName = "Dirty Deeds Done Dirt Cheap";
            album.Artist = "AC/DC";
            album.YearReleased = 1977;

            album.Songs = new JsonArray() as dynamic;

            dynamic song = new JsonObject();
            song.SongName = "Dirty Deeds Done Dirt Cheap";
            song.SongLength = "4:11";
            album.Songs.Add(song);

            song = new JsonObject();
            song.SongName = "Love at First Feel";
            song.SongLength = "3:10";
            album.Songs.Add(song);

            Console.WriteLine(album.ToString());
        }

    This produces proper JSON just as you would expect:

        {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]}

    The important thing about this code is that there's no explicit type used to hold the values to serialize to JSON. I am essentially creating the value structure on the fly by adding properties, and then serializing it to JSON. This means the code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject store these values in pseudo-collections of key/value pairs that are exposed as properties through the DynamicObject functionality in .NET.
    The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that, the syntax looks like normal object access syntax. Always remember, though, that these values are dynamic, which means no IntelliSense and no compiler type checking; it's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

        // strong type instance
        var jsonObject = new JsonObject();

        // you can explicitly add values here
        jsonObject.Add("Entered", DateTime.Now);

        // expando-style instance where you can just 'use' properties
        dynamic album = jsonObject;
        album.AlbumName = "Dirty Deeds Done Dirt Cheap";

    JsonValue internally stores property keys and values in collections, and you can iterate over them at runtime. You can also manipulate the collections if you need to get the object structure to look exactly the way you want. Again, if you've used ExpandoObject before, JsonObject/JsonValue are very similar in behavior.

    Reading JSON strings into JsonValue

    The JsonValue structure supports importing JSON via the Parse() and Load() methods, which read JSON data from a string or from various streams, respectively. Essentially, JsonValue includes the core JSON parsing needed to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

        [TestMethod]
        public void JsonValueParsingTest()
        {
            var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}";

            dynamic json = JsonValue.Parse(jsonString);

            // values require casting
            string name = json.Name;
            string company = json.Company;
            DateTime entered = json.Entered;

            Assert.AreEqual(name, "Rick");
            Assert.AreEqual(company, "West Wind");
        }

    The JSON string represents an object with three properties, which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic, I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are of type JsonPrimitive, and I have to assign them to their appropriate types before I can do type comparisons. The dynamic properties will automatically cast to the expected type as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method can't, as it expects two object instances, and comparing json.Company to "West Wind" compares two different types (JsonPrimitive to String), which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example.
    Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():

        [TestMethod]
        public void JsonArrayParsingTest()
        {
            var jsonString = @"[
            {
                ""Id"": ""b3ec4e5c"",
                ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
                ""Artist"": ""AC/DC"",
                ""YearReleased"": 1977,
                ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
                ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
                ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"",
                ""Songs"": [
                    { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
                    { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
                    { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
                ]
            },
            {
                ""Id"": ""67280fb8"",
                ""AlbumName"": ""Echoes, Silence, Patience & Grace"",
                ""Artist"": ""Foo Fighters"",
                ""YearReleased"": 2007,
                ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
                ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"",
                ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"",
                ""Songs"": [
                    { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" },
                    { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" },
                    { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" }
                ]
            },
            {
                ""Id"": ""7b919432"",
                ""AlbumName"": ""End of the Silence"",
                ""Artist"": ""Henry Rollins Band"",
                ""YearReleased"": 1992,
                ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
                ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
                ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
                ""Songs"": [
                    { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
                    { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
                ]
            }
            ]";

            dynamic albums = JsonValue.Parse(jsonString);

            foreach (dynamic album in albums)
            {
                Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
                foreach (dynamic song in album.Songs)
                {
                    Console.WriteLine("\t" + song.SongName);
                }
            }

            Console.WriteLine(albums[0].AlbumName);
            Console.WriteLine(albums[0].Songs[1].SongName);
        }

    It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact, it looks and feels a lot like using JavaScript to parse through this data, doesn't it? And that's the point…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in .NET, Web Api, JSON

    Read the article
