Search Results

Search found 24106 results on 965 pages for 'usb key'.


  • How to install RAID drivers on already installed Windows 7?

    - by happysencha
    64-bit Windows 7 Ultimate, 6 GB RAM, Intel i7 920, Intel X25-M 80 GB 2.5" SSD, Club 3D Radeon HD5750, Gigabyte GA-EX58-UD4P motherboard.

    I've been running fine with Windows 7 installed on the SSD. I wanted to create a mirrored RAID-1 setup for backups, so I ordered two Samsung HD203WI hard disks. This motherboard has two different RAID controllers: Intel's ICH10R (6 SATA ports) and Gigabyte's SATA2 controller (2 SATA ports). I googled around, and the ICH10R seemed to be the better choice, so since then I've been trying to make it work.

    When I activate [RAID] mode in the BIOS, Windows 7 gives a BSOD exactly as described by this guy: "Windows 7 will start to boot, it gets to the screen where there are 4 colors coming together and it blue screens and restarts no matter what I do."

    First thing I did: I turned off RAID, booted into Windows and tried to install the SATA RAID drivers from Gigabyte. The driver installation program gives a "This computer does not meet the minimum requirements for installing the software" error. I then tried Intel's Rapid Storage Technology drivers (apparently the same package offered on Gigabyte's site), which resulted in exactly the same error.

    I then detached the new Samsung hard disks from the SATA ports but left [RAID] enabled in the BIOS. To my surprise it still BSOD'd, so at that point I knew it was an OS/driver issue. With Gigabyte's RAID enabled (and the ICH10R RAID disabled), it booted just fine.

    So then I thought that maybe I can't install the RAID drivers from within the OS. I caused the BSOD on purpose once again, and with the ICH10R RAID activated and the Samsung disks attached I chose Windows 7 Recovery mode in the boot menu. It sees some problem(s), tries to repair, does not succeed, and never asks for the drivers (which I put on a USB stick). I also tried the recovery command line: "rundll32 syssetup, SetupInfObjectInstallAction DefaultInstall 128 iaStor.inf", but it gave "Installation failed."

    So I'm clueless how to proceed. Do I really need to re-install Windows 7 and load the RAID drivers during setup? I don't want to install any OS on the RAID; Windows 7 is and will stay on the SSD. I just want a RAID-1 backup using those two hard disks. Why would I need to re-install the operating system just to add a RAID array?
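
    A hedged suggestion, not part of the original question: this boot-time BSOD is usually Windows refusing to start because the storage miniport driver for the new controller mode is disabled. Forcing the in-box Intel RAID driver (iaStorV on Windows 7; adjust the service name if a vendor iaStor driver is installed instead) to load at boot before flipping the BIOS switch often avoids a reinstall:

    ```
    rem Run from an elevated command prompt BEFORE enabling [RAID] in the BIOS.
    rem Start = 0 makes the in-box Intel RAID miniport load at boot.
    reg add "HKLM\SYSTEM\CurrentControlSet\services\iaStorV" /v Start /t REG_DWORD /d 0 /f
    ```

    After rebooting and switching the ICH10R to [RAID] in the BIOS, the Intel RST package should then install normally.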

    Read the article

  • Recovering damaged external hard disk by installing internally

    - by nfarshchi
    I had a 1TB Western Digital (My Book series) 3.5" USB 3.0 drive. One day the SATA-to-USB3 converter board was damaged and has not worked since. I decided to open the cover and use the HDD as an internal drive. When I attached the HDD to my PC and booted into Windows, it asked me which partition style I wanted to use, "MBR or GPT" (I don't remember the exact question). I chose MBR, and Windows gave me a 1TB empty hard drive. I tried to recover with Recover My Files and some other recovery programs, but with no success. Someone told me I should have chosen GPT instead of MBR. How can I do that now? Someone else told me that the SATA-to-USB3 converter board is tied to how the data is saved on the HDD, so you cannot use the drive internally without losing data, and that I should find another SATA-to-USB3 board (the exact same one). That is impossible, because they are not produced any more. Please help me find a solution to bring back my data.

    UPDATE: I have a 1TB WD My Book USB 3.0. The board that converts SATA to USB3 was damaged, so while the HDD was in the enclosure the computer did not recognize it. I opened the box and removed the HDD to use it internally. After connecting it to my PC, Windows showed me one message with two choices, MBR or GPT. I chose MBR, and Windows gave me a 1TB empty new volume. I tried many recovery programs to recover my data, but with no success. I brought it to an expert recovery company, and they told me the converter board (SATA to USB3) applies some encryption to the data, and without that board you cannot recover anything. So I bought another empty WD enclosure and put the HDD inside, but even after that there were no files, and recovery attempts in that state also failed.

    So I have some unanswered questions: Does this converter board really apply a password or encryption? If yes, how can I work around it? Has using many recovery programs affected my data? Any suggestion or solution for bringing back my data? I have used recovery programs such as Recover My Files, EaseUS Data Recovery, Easy Recovery, TestDisk, and Ontrack EasyRecovery. Note: when I was using TestDisk it asked me to choose which partition table type I wanted to use; I chose NTFS, as it was. Did this make any change to the data?

    Read the article

  • A Windows Update Prevented Live Discs From Working?

    - by user88311
    First off I'll state my system specs: Acer Aspire M1100, Windows Vista Home Basic 32-bit OEM, 2 GB of DDR2 RAM, 160 GB hard drive, 2.7 GHz AMD Athlon 64 processor, ATI Radeon X1250 graphics card.

    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up I would receive a BSOD with the information BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000); my computer would not start up with any USB devices plugged in, and it always required me to run Startup Repair before it started.

    When I first started it up and was able to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows Solutions Center brought up on my screen. I restarted and installed the fix, hoping it would cure the problem; it did not. After installing the fix and restarting I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found that Startup Repair could not fix the problem, and I restored, as I most likely should have done in the first place.

    After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. In an attempt to find a fix I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. After doing this my computer would no longer even start up: when I boot normally it gets to just when the loading screen finishes and then restarts, and when I run Startup Repair it runs for about 15 seconds and then the computer restarts as well.

    I have tried running Ubuntu live discs in order to simply access the drive and copy the two-month-old physical backup I have of my C drive back onto it, but whenever I run the live OS, just as it gets to the end of the load screen and is about to boot, the computer again restarts. Yet if I put in a GParted disc I am able to boot it fully, although it gives me access only to partition managing, not the file system, and when I attempt to access the internet through it the computer once again restarts.

    So my question is: how could the update, and whatever has happened since, prevent me from running live OSes properly?

    Read the article

  • Windows 7 booting and startup repair issues

    - by aardvark
    I have an MSI FR720 laptop with Windows 7 and Lubuntu partitions. For quite a while (6 months or so) I've had issues booting from my hard drive; it would take between 5 minutes and several hours for the drive to be recognized as a bootable device. I did several disk checks on it, and the drive seems in perfect condition, and the fact that booting would usually only work after removing the drive and reseating it in its slot, or lightly shaking it, makes me think it has something to do with the connection in the drive slot rather than the drive itself.

    I was having particular trouble getting the drive detected today, so I decided to try booting from an external hard-drive dock. It was detected on the first try and so far has had no problems finding the bootable partitions on my drive. When I selected my Windows 7 partition from the boot menu it said Windows hadn't been shut down properly last time and needed Startup Repair. I've done this several times over the last 6 months, so this is hardly unusual. Startup Repair failed, so I tried a System Restore. The System Restore also failed, saying no files were changed. I restarted and tried again. This time, however, Startup Repair no longer detects a Windows OS. Clicking Next and running Startup Repair always fails. If I instead select "Launch Windows normally" it gets to the Windows animation, stops halfway through and crashes into a BSOD. I can't read the error on the screen because it immediately goes black and tries to restart.

    This is my first time asking a question like this online, so let me know if I need to provide any extra information and I'll do my best to give it. I tried using diskpart to list the partitions and see if one is marked active, but it says no disks were detected. I can run Lubuntu just fine, and I can see all of my Windows 7 files from it.

    EDIT: The Startup Repair diagnosis and repair log is this:

    Number of repair attempts: 1

    Session details:
    System Disk =
    Windows directory =
    AutoChk Run = 0
    Number of root causes = 1

    Test Performed:
    Name: Check for updates
    Result: Completed successfully. Error code = 0x0
    Time taken = 15 ms

    Test Performed:
    Name: System disk test
    Result: Completed successfully. Error code = 0x0
    Time taken = 31 ms

    Root cause found: If a hard disk is installed, it is not responding.

    Any chance that this is a result of me doing this through an external dock over USB?
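
    For reference (a hedged addition, not from the original post): on a machine where diskpart can see the disk, the sequence below is what would normally reveal whether a partition carries the active flag on an MBR disk; here it fails at the first step because no disk is detected, which matches the "hard disk not responding" root cause above.

    ```
    DISKPART> list disk
    DISKPART> select disk 0
    DISKPART> list partition
    DISKPART> select partition 1
    DISKPART> detail partition
    ```

    detail partition prints an "Active: Yes/No" line for the selected partition.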

    Read the article

  • Lenovo Windows 8 EFI restore from image

    - by anderhil
    First time here. I bought an E530 with Windows 8, and within the first hour of working with it I have a problem. I have an SSD with Windows 7 that I want to use with my new E530. I made a sysprep of the Win 7 install and fitted the SSD in the E530. The HDD which was inside the E530 I want to use as a second HDD in place of my DVD drive. I connected this HDD through a USB-to-SATA adapter to copy some files from the SSD to the HDD. Unfortunately it didn't see the file system on the HDD (though the first time I booted to it, it booted straight into Windows 8). I made some mistakes and corrupted the file system on the HDD. I tried a bunch of tricks to recover the GPT, but they didn't work. I did manage to recover the Lenovo_Recovery partition to my SSD using recovery tools.

    And now I'm stuck. These things are new to me - EFI, GPT, etc. I don't know how this stuff works, and I have been trying to understand it for hours, but nothing seems to work. I want to restore Windows 8 to the HDD, so it is alive there. What I have done so far: I formatted the HDD, then ran the PBRALL script I took from the Lenovo_Recovery partition:

    ```
    convert gpt
    create partition Primary size=1000 ID=DE94BBA4-06D1-4D40-A16A-BFD50179D6AC
    gpt attributes=0x8000000000000001
    assign letter=W
    format quick LABEL=WINRE_DRV
    create partition efi size=260
    assign letter=s
    format quick fs=fat32 LABEL=SYSTEM_DRV
    create par msr size=128
    create partition primary noerr
    assign letter=t
    format quick LABEL=Windows8_OS
    shrink desired=12197
    create partition Primary ID=DE94BBA4-06D1-4D40-A16A-BFD50179D6AC
    gpt attributes=0x8000000000000001
    assign letter=q
    format quick LABEL=Lenovo_Recovery
    ```

    It recreated the partitions. I then copied the contents of SDRIVE.zip to the SYSTEM_DRV partition, copied the contents of WDRIVE.zip to the WINRE_DRV partition, and copied the restored Lenovo_Recovery files back to the Lenovo_Recovery partition. So now I have three system partitions:

    SYSTEM_DRV: BOOT\boot.sdi; EFI\BOOT\bootx64.efi; EFI\BOOT\LenovoBT.efi; EFI\Lenovo\...; EFI\Microsoft\...
    WINRE_DRV: Recovery\WindowsRE\winre.wim
    Lenovo_Recovery: install.wim and a bunch of other things

    So I put the HDD back inside the laptop and tried to boot - but nothing works. It just doesn't boot to anything - no errors, nothing at all. When I choose this HDD manually for boot, the screen just blinks black and that's all; it returns to the boot device menu. SYSTEM_DRV is the EFI partition, so I don't understand why it doesn't boot - it has the needed files inside.

    Can anybody tell me what should be done to make it boot into a recovery console or something like that? How do I restore Windows 8 from the Lenovo_Recovery install.wim image? As I understand it I have all the files where they should be, but why doesn't it work? How do I troubleshoot such things? Also, if somebody has a good link where the EFI boot process is explained in detail, that would be great, because I still don't understand how the firmware knows which partition to boot.
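
    A hedged suggestion, not part of the original thread: copying the EFI files restores the boot loaders, but nothing so far has rebuilt a BCD store for them, and nothing has applied the Windows image to the empty Windows8_OS partition. Assuming the drive letters from the diskpart script above (T: = Windows8_OS, S: = SYSTEM_DRV, Q: = Lenovo_Recovery), the usual route from a Windows 8 installer/recovery command prompt would look something like this:

    ```
    rem Apply the factory image to the Windows partition. Index 1 is an
    rem assumption - check first with: dism /Get-WimInfo /WimFile:Q:\install.wim
    dism /Apply-Image /ImageFile:Q:\install.wim /Index:1 /ApplyDir:T:\

    rem Recreate the EFI boot files and the BCD store on the system partition.
    bcdboot T:\Windows /s S: /f UEFI
    ```

    After that the firmware should see a "Windows Boot Manager" entry for the disk.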

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I've run into the problem a few times now: how do you pre-authenticate .NET WebRequest calls made over HTTP - that is, send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sounds like it should be easy: the .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx

    Unfortunately the MSDN sample is wrong. As is the text of the Help topic, which incorrectly leads you to believe that PreAuthenticate... wait for it - pre-authenticates. But it doesn't allow you to set credentials that are sent on the first request. What this property actually does is quite different: it doesn't send credentials on the first request, but rather caches the credentials ONCE you have already authenticated once.

    HTTP authentication is based on a challenge-response mechanism, typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this:

    ```
    GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
    Host: rasnote
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en,de;q=0.7,en-us;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    ```

    and the server responds with:

    ```
    HTTP/1.1 401 Unauthorized
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 2.0.50727
    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM
    WWW-Authenticate: Basic realm="rasnote"
    X-Powered-By: ASP.NET
    Date: Tue, 27 Oct 2009 00:58:20 GMT
    Content-Length: 5163
    ```

    plus the actual error message body. The client is then responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth):

    ```
    GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
    Host: rasnote
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en,de;q=0.7,en-us;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9
    Authorization: Basic cgsf12aDpkc2ZhZG1zMA==
    ```

    Once the authorization info is sent, the server responds with the actual page result. Now, if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means that if you look in Fiddler or some other HTTP proxy that captures requests, you'll see each request re-authenticate: for two requests fired back to back, you can see the 401 challenge and the 200 response for both requests. If you watch this same conversation between a browser and a server, you'll notice that the first 401 is also there, but the subsequent 401 responses are not present.
    WebRequest.PreAuthenticate

    And this is precisely what the WebRequest.PreAuthenticate property does: it's a caching mechanism that caches the connection credentials for a given domain in the active process and resends them on subsequent requests. It does not send credentials on the first request, but it will cache credentials on subsequent requests after authentication has succeeded:

    ```csharp
    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";

    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential("rick", "secret", "rasnote");
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote");
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    ```

    This results in the desired sequence, where only the first request doesn't send credentials. This is quite useful, as it saves quite a few round trips to the server - basically it saves one authentication round trip for every authenticated request you make. In most scenarios I think you'd want to send credentials this way, but one downside is that there's no way to log out the client. Since the client always sends the credentials once authenticated, only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (i.e. re-challenging with a forced 401 response).

    Forcing Basic Authentication Credentials on the first Request

    On a few occasions I've needed to send credentials on the first request - mainly to some oddball third-party Web Services (why you'd want to use Basic Auth on a Web Service is beyond me - don't ask - but it's not uncommon in my experience). This is true of certain services that use Basic Authentication (especially some Apache-based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly, but there it is.

    Now, the following works only with Basic Authentication, because it's pretty straightforward to create the Basic Authorization 'token' in code: it's just an unencrypted base64 encoding of the user name and password. As you might guess, this is totally insecure and should only be used over HTTPS/SSL connections (I'm not using SSL in this example so I can capture the Fiddler trace, and my local machine doesn't have a cert installed, but for production apps ALWAYS use SSL with Basic Auth).
    The idea is that you simply add the required Authorization header to the request yourself, along with the authorization string that encodes the username and password:

    ```csharp
    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";

    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

    string user = "rick";
    string pwd = "secret";
    string domain = "www.west-wind.com";

    string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
    req.PreAuthenticate = true;
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.Headers.Add("Authorization", auth);
    req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();
    ```

    This works and causes the request to immediately send auth information to the server. However, it only works with Basic Auth, because you can actually create the authentication credentials on the client - they're essentially clear text. The same doesn't work for Windows or Digest authentication, since you can't easily create the authentication token on the client and send it to the server.

    Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as WebRequest is concerned it never sent the authentication information, so it's not actually caching the value any longer. If you run 3 requests in a row like this:

    ```csharp
    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";

    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

    string user = "ricks";
    string pwd = "secret";
    string domain = "www.west-wind.com";

    string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
    req.PreAuthenticate = true;
    req.Headers.Add("Authorization", auth);
    req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential(user, pwd, domain);
    req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential(user, pwd, domain);
    req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    ```

    you'll find a trace where the first request (the one we explicitly add the header to) authenticates, the second is challenged, and any subsequent ones use the PreAuthenticate credential caching. In effect you end up with one extra 401 round trip in this scenario, which is still better than a 401 challenge on each request.

    Getting Access to WebRequest in Classic .NET Web Service Clients

    If you're running a classic .NET Web Service client (non-WCF), one issue with the above is: how do you get access to the WebRequest to actually add the custom headers for the custom authentication described above?
    One easy way is to implement a partial class that allows you to add headers, with something like this:

    ```csharp
    public partial class TaxService
    {
        protected NameValueCollection Headers = new NameValueCollection();

        public void AddHttpHeader(string key, string value)
        {
            this.Headers.Add(key, value);
        }

        public void ClearHttpHeaders()
        {
            this.Headers.Clear();
        }

        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
            request.Headers.Add(this.Headers);
            return request;
        }
    }
    ```

    where TaxService is the name of the .NET-generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers, which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there's a bit more work involved: you create a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx

    FWIW, I think HTTP header manipulation should be readily available on any HTTP-based Web Service client DIRECTLY, without having to subclass or implement a special interface hook. But alas, a little extra work is required in .NET to make this happen.

    Not a Common Problem, but when it happens…

    This has been one of those issues that is really rare, but it's bitten me on several occasions when dealing with oddball Web Services - a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don't have a choice but to follow it. Lovely following standards that implementers decide to ignore, isn't it? :-}

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp, Web Services.

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractJsonSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScriptSerializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall extensibility model was pretty limited (JavaScriptSerializer is what MVC uses for JSON responses).

    JSON.NET provides a robust JSON serializer that has both high-level and low-level components, and supports binary JSON, JSON contracts, XML-to-JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the captured JSON directly into a .NET type as DataContractJsonSerializer or the JavaScriptSerializer do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries, etc.), and other times you simply don't have the structures in place, or don't want to create them, to actually import the data.

    If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API, when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third-party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third-party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's location). The Google API returns a ton of information that my application had no interest in; all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on the business entities that needed to receive the data - no need to map the entire Maps API structure.
    Getting JSON.NET

    The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project from the Package Manager Console with:

    PM> Install-Package Newtonsoft.Json

    or by using Manage NuGet Packages in your project references. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

    Creating JSON on the fly with JObject and JArray

    Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and songs, and JArray for the actual collection of songs:

    ```csharp
    [TestMethod]
    public void JObjectOutputTest()
    {
        // strongly typed instance
        var jsonObject = new JObject();

        // you can explicitly add values here using the class interface
        jsonObject.Add("Entered", DateTime.Now);

        // or cast to dynamic to dynamically add/read properties
        dynamic album = jsonObject;
        album.AlbumName = "Dirty Deeds Done Dirt Cheap";
        album.Artist = "AC/DC";
        album.YearReleased = 1976;

        album.Songs = new JArray() as dynamic;

        dynamic song = new JObject();
        song.SongName = "Dirty Deeds Done Dirt Cheap";
        song.SongLength = "4:11";
        album.Songs.Add(song);

        song = new JObject();
        song.SongName = "Love at First Feel";
        song.SongLength = "3:10";
        album.Songs.Add(song);

        Console.WriteLine(album.ToString());
    }
    ```

    This produces a complete JSON structure:

    ```json
    {
      "Entered": "2012-08-18T13:26:37.7137482-10:00",
      "AlbumName": "Dirty Deeds Done Dirt Cheap",
      "Artist": "AC/DC",
      "YearReleased": 1976,
      "Songs": [
        {
          "SongName": "Dirty Deeds Done Dirt Cheap",
          "SongLength": "4:11"
        },
        {
          "SongName": "Love at First Feel",
          "SongLength": "3:10"
        }
      ]
    }
    ```

    Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations.

    The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means the code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo-collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface implemented by JSON.NET's JToken base class.

    For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast it to dynamic, and then add properties and items to it.
    Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

    ```csharp
    // strongly typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here
    jsonObject.Add("Entered", DateTime.Now);

    // expando-style instance - you can just 'use' properties
    dynamic album = jsonObject;
    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    ```

    JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

    ```csharp
    foreach (var item in jsonObject)
    {
        Console.WriteLine(item.Key + " " + item.Value.ToString());
    }
    ```

    The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you've used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

    Importing JSON with JObject.Parse() and JArray.Parse()

    The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or from various streams, respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

    ```csharp
    public void JValueParsingTest()
    {
        var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                            ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

        dynamic json = JValue.Parse(jsonString);

        // values require casting
        string name = json.Name;
        string company = json.Company;
        DateTime entered = json.Entered;

        Assert.AreEqual(name, "Rick");
        Assert.AreEqual(company, "West Wind");
    }
    ```

    The JSON string represents an object with three properties, which is parsed into a JObject and cast to dynamic. Once cast to dynamic, I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic typing works: the type can't be determined based on the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable, as I did above, or explicitly cast ((string)json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
    When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON - so effectively the parameter and result type explicitly determine the input and output format, which is nice.

    Dynamic to Strong Type Mapping

    You can also map JObject and JArray instances to strongly typed objects, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album, and casts that album to a static Album instance:

    ```csharp
    [TestMethod]
    public void JsonParseToStrongTypeTest()
    {
        JArray albums = JArray.Parse(jsonString) as JArray;

        // pick out one album
        JObject jalbum = albums[0] as JObject;

        // Copy to a static Album instance
        Album album = jalbum.ToObject<Album>();

        Assert.IsNotNull(album);
        Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
        Assert.IsTrue(album.Songs.Count > 0);
    }
    ```

    This is pretty damn useful for the scenario I mentioned earlier: you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

    Strongly typed JSON Parsing

    With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop-dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

    ```csharp
    [TestMethod]
    public void StronglyTypedSerializationTest()
    {
        // Demonstrate deserialization from a raw string
        var album = new Album()
        {
            AlbumName = "Dirty Deeds Done Dirt Cheap",
            Artist = "AC/DC",
            Entered = DateTime.Now,
            YearReleased = 1976,
            Songs = new List<Song>()
            {
                new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
                new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
            }
        };

        // serialize to string
        string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
        Console.WriteLine(json2);

        // make sure we can deserialize back
        var album2 = JsonConvert.DeserializeObject<Album>(json2);

        Assert.IsNotNull(album2);
        Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
        Assert.IsTrue(album2.Songs.Count == 2);
    }
    ```

    JsonConvert is a high-level static class that wraps lower-level functionality, but you can also use the JsonSerializer class, which allows you to serialize and parse to and from streams. It's a little more work, but gives you a bit more control (a minimal stream-based sketch follows after the summary). The functionality available is easy to discover with IntelliSense, and that's good, because there's not a lot in the way of documentation that's actually useful.

    Summary

    JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization to complex querying of JSON objects using LINQ. It's good to see this open-source library getting integrated into .NET and pushing out the old and tired stock .NET parsers, so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
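
    To round out the JsonSerializer mention above, here is a minimal stream-based sketch. It is not from the original post: the Album type is the same sample type used in the tests above, and the file name is a hypothetical placeholder.

    ```csharp
    // Hedged sketch: stream-based serialization with JsonSerializer.
    var serializer = new JsonSerializer { Formatting = Formatting.Indented };

    // Serialize straight to a file stream - no intermediate string.
    using (var sw = new StreamWriter("album.json"))
    using (var writer = new JsonTextWriter(sw))
    {
        serializer.Serialize(writer, album);
    }

    // Deserialize back from the stream.
    using (var sr = new StreamReader("album.json"))
    using (var reader = new JsonTextReader(sr))
    {
        var album2 = serializer.Deserialize<Album>(reader);
    }
    ```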
    Resources

    Sample Test Project
    http://json.codeplex.com/

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX.

    Read the article

  • TreeGridView in VB.NET 3.5

    - by hgulyan
    Hi, I need a control like a TreeView, but with the option to use multiple columns in a node. There's a control called TreeListView on CodeProject, but it doesn't have some features I need:

    1) I need a key on every node, or some way to bind an object to the control.
    2) I need to change the node image (like in file systems - folders and files).
    3) I need a CheckBox on every node.
    4) I need the path and level of a node.

    Does anyone know a Windows Forms control that does all this? Thank you.
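
    A hedged aside, not part of the original question: multi-column support aside, the stock WinForms TreeView already covers items 1-4. A minimal C# sketch (the same properties are available from VB.NET; names like folderIcons and someKeyObject are illustrative):

    ```csharp
    var tree = new TreeView
    {
        CheckBoxes = true,       // 3) a CheckBox on every node
        ImageList = folderIcons  // 2) per-node images (folder/file icons)
    };

    var node = new TreeNode("Documents")
    {
        Tag = someKeyObject,     // 1) bind a key or object to the node
        ImageKey = "folder",
        SelectedImageKey = "folder"
    };
    tree.Nodes.Add(node);

    // 4) path and level come built in:
    string path = node.FullPath; // e.g. "Documents" for this root node
    int level = node.Level;      // 0 for a root node
    ```

    For the multi-column requirement you would still need a TreeListView-style control or an owner-drawn grid; the sketch only shows how far the built-in control gets you.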

    Read the article

  • "An internal error occurred." when loading pfx file with X509Certificate2

    - by tmp3128
    I'm trying to use a self-signed certificate (C#):

    X509Certificate2 cert = new X509Certificate2(Server.MapPath("~/App_Data/myhost.pfx"), "pass");

    on a shared web hosting server, and I got an error: System.Security.Cryptography.CryptographicException: An internal error occurred. The stack trace ends with:

    System.Security.Cryptography.CryptographicException.ThrowCryptogaphicException(Int32 hr) +33
    System.Security.Cryptography.X509Certificates.X509Utils._LoadCertFromFile(String fileName, IntPtr password, UInt32 dwFlags, Boolean persistKeySet, SafeCertContextHandle& pCertCtx) +0
    System.Security.Cryptography.X509Certificates.X509Certificate.LoadCertificateFromFile(String fileName, Object password, X509KeyStorageFlags keyStorageFlags) +237
    System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(String fileName, String password) +131

    On my dev machine it loads fine. The reason I load a *.pfx rather than a *.cer file is that I need private key access (the .cer file loads OK). I made the pfx on my dev machine like this:

    makecert -r -n "CN=myhost.com, [email protected]" -sky exchange -b 01/01/2009 -pe -sv myhost.pvk myhost.cer
    pvk2pfx -pvk myhost.pvk -spc myhost.cer -pfx myhost.pfx -po pass

    Please assist. PS: makecert is v5.131.3790.0.
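
    A hedged suggestion, not part of the original question: on shared hosting this exact exception is often caused by the pfx import trying to write the private key into a user profile key store that the application pool identity cannot access. Passing X509KeyStorageFlags.MachineKeySet usually avoids it:

    ```csharp
    // Hedged sketch: keep the private key in the machine store instead of
    // the (possibly unavailable) user profile store.
    X509Certificate2 cert = new X509Certificate2(
        Server.MapPath("~/App_Data/myhost.pfx"),
        "pass",
        X509KeyStorageFlags.MachineKeySet);
    ```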

    Read the article

  • Find multiple regex in each line and skip result if one of the regex doesn't match

    - by williamx
    I have a list of variables:

    variables = ['VariableA', 'VariableB', 'VariableC']

    which I'm going to search for, line by line:

    ```python
    import re

    variables = ['VariableA', 'VariableB', 'VariableC']

    d = {}
    with open("temp.txt") as ifile:
        for line in ifile:
            matches = {}
            for name in variables:
                regex = r'(' + name + r')[:|=|\(](-?\d+(?:\.\d+)?)(?:\))?'
                found = re.findall(regex, line)
                if not found:
                    break               # one variable missing: skip this line
                matches[name] = found
            else:
                # every variable matched, so collect the results
                for pairs in matches.values():
                    for k, v in pairs:
                        d.setdefault(k, []).append(v)
    ```

    The requirement is that I only keep the lines where all the regexes match. I want to collect all results for each variable in a dictionary where the variable name is the key and the value becomes a list of all matches. The code above is what I've pieced together so far, and I'm not sure it handles everything correctly yet...

    Read the article

  • CFBundleVersion error even though new version is higher

    - by Rob
    When trying to upload an upgrade binary to iTunes Connect, I am getting the error message below even though I have updated CFBundleVersion in my Info.plist file to a higher value (and checked the package contents after building to be sure it copied over). I have already successfully submitted a different app upgrade using this same method, but for whatever reason it won't work this time. Any insights, or is this an iTunes Connect bug?

    The binary you uploaded was invalid. The key CFBundleVersion in the Info.plist file must contain a higher version than that of the previously uploaded version.
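
    A hedged note, not from the original question: iTunes Connect compares CFBundleVersion as a dot-separated sequence of integers, so letters, build suffixes, or a different number of components can make a version compare lower than it looks. The usual first thing to check is that both builds use plain period-separated numbers, for example:

    ```xml
    <key>CFBundleVersion</key>
    <string>1.0.1</string> <!-- previously uploaded build was, say, 1.0.0 -->
    ```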

    Read the article

  • Cannot build/create dll in Visual Studio because of admin rights??

    - by Vidar
    I have local admin rights on the PC I am using - but some things are not allowed, i.e. I can't right-click on My Computer and see the properties! Anyway, I was trying to build my solution in Visual Studio - I just added a simple class library with barely any code in it and it won't build. It gives the error:

    C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3019,9): error MSB3216: Cannot register assembly "C:\Documents and Settings\fooUser\My Documents\Visual Studio 2008\Projects\FooSolution\Foo.Bar\bin\Debug\Foo.Bar.Utility.dll" - access denied. Please make sure you're running the application as administrator. Access to the registry key 'HKEY_CLASSES_ROOT\Record' is denied.

    I take it I need FULL admin rights?? - although the solution did compile before I added this class library, so I don't understand why the addition of this library has had such a drastic effect!
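
    A hedged observation, not part of the original question: error MSB3216 comes from the build step that registers an assembly for COM interop, which writes under HKEY_CLASSES_ROOT and therefore needs elevation. If the new class library doesn't actually need COM registration, unchecking "Register for COM interop" on the project's Build page (equivalently, this MSBuild property in the .csproj) lets it build without full admin rights:

    ```xml
    <!-- In the class library's .csproj, inside the active PropertyGroup -->
    <RegisterForComInterop>false</RegisterForComInterop>
    ```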

    Read the article

  • Entity Framework 4 mapping fragment error when adding new entity scalar

    - by Jason Morse
    I have an Entity Framework 4 model-first design. I created a first draft of my model in the designer and all was well. I compiled, generated the database, etc. Later on I tried to add a string scalar (Nullable = true) to one of my existing entities, and I keep getting this type of error when I compile:

    Error 3004: Problem in mapping fragments starting at line 569: No mapping specified for properties MyEntity.MyValue in Set MyEntities. An Entity with Key (PK) will not round-trip when: Entity is type [MyEntities.MyEntity]

    I keep having to manually open the EDMX file and correct the XML whenever I add scalars. Any ideas on what's going on?
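
    A hedged illustration, not from the original question: error 3004 means the new conceptual property has no entry in the EDMX's mapping section (and usually no column in the storage model either), which happens when the designer's three sections get out of sync. The mapping needs a ScalarProperty along these lines (the entity and property names are taken from the error message; the column and table names are assumptions):

    ```xml
    <EntitySetMapping Name="MyEntities">
      <EntityTypeMapping TypeName="MyEntities.MyEntity">
        <MappingFragment StoreEntitySet="MyEntity">
          <ScalarProperty Name="Id" ColumnName="Id" />
          <ScalarProperty Name="MyValue" ColumnName="MyValue" />
        </MappingFragment>
      </EntityTypeMapping>
    </EntitySetMapping>
    ```

    In a model-first workflow, re-running "Generate Database from Model" after adding the scalar normally regenerates the storage model and mapping, so the hand edits aren't needed.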

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, including some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, web development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

    One ASP.NET

    With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc.), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File-New Project with VS 2013 you'll now see a single ASP.NET Project option. Selecting this project will bring up an additional dialog that allows you to start with a base project template and then optionally add or remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API or MVC support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy for you to use any ASP.NET technology you want within your apps and take advantage of any feature across the entire ASP.NET technology span.

    Richer Authentication Support

    The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications - and makes it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application:

    No Authentication
    Individual User Accounts (single sign-on support with Facebook, Twitter, Google, and Microsoft ID - or Forms Auth with ASP.NET Membership)
    Organizational Accounts (single sign-on support with Windows Azure Active Directory)
    Windows Authentication (Active Directory in an intranet application)

    The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of directories using it (for free and deployed within seconds). It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories: simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign in using their Active Directory credentials - regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

    Responsive Project Templates with Bootstrap

    The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open-source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
    For example, in a browser window the home page created by the MVC template renders a standard full-width layout. When you resize the browser to a narrow window, to see how it would look on a phone, the contents gracefully wrap around and the horizontal top menu turns into an icon; clicking the menu icon expands it into a vertical menu, which enables a good navigation experience for devices with small screens. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices - and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

    Visual Studio Web Tooling Improvements

    Visual Studio 2013 includes a new, much richer HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design time to determine all of the CSS classes that are available; in that case the auto-completion list can contain, for example, the classes from Bootstrap's CSS file. No more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

    Browser Link - a SignalR channel between browser and Visual Studio

    The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox and Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop and test against multiple browsers in parallel. Browser Link also exposes an API that enables developers to write Browser Link extensions. By taking advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross the boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-control mobile emulators, and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

    ASP.NET Scaffolding

    ASP.NET Scaffolding is a new code-generation framework for ASP.NET web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
    When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add -> New Scaffold Item context menu. Support for scaffolding async controllers uses the new async features from Entity Framework 6.

    ASP.NET Identity

    ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using a username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

    ASP.NET Web API 2

    ASP.NET Web API 2 has a bunch of great improvements:

    Attribute routing. ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers (the original post showed this as a code screenshot; a reconstructed sketch appears at the end of this post).

    OAuth 2.0 support. The Web API and Single Page Application project templates now support authorization using OAuth 2.0, a framework for authorizing client access to protected resources. It works for a variety of clients, including browsers and mobile devices.

    OData improvements. ASP.NET Web API now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include support for $select, $expand, $batch and $value, improved extensibility, type-less support, and the ability to reuse an existing model.

    OWIN integration. ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

    More Web API improvements. In addition to the features above there have been a host of other improvements in ASP.NET Web API, including CORS support, authentication filters, filter overrides, improved unit testability, and a portable ASP.NET Web API client. To learn more go to http://www.asp.net/web-api/.

    ASP.NET SignalR 2

    ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements.
OAuth 2.0 support

The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients, including browsers and mobile devices.

OData improvements

ASP.NET Web API now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include:

- Support for $select, $expand, $batch, and $value
- Improved extensibility
- Type-less support
- Reuse of an existing model

OWIN integration

ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

More Web API improvements

In addition to the features above, ASP.NET Web API also adds:

- CORS support
- Authentication filters
- Filter overrides
- Improved unit testability
- A portable ASP.NET Web API client

To learn more go to http://www.asp.net/web-api/.

ASP.NET SignalR 2

ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR has also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components in the SignalR wiki). We've also added support for the portable .NET client in SignalR 2.0 and created a new self-hosting package; this change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

ASP.NET MVC 5

The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the ASP.NET Web API, SignalR and Identity improvements described above. You can also customize your MVC project and configure authentication using the One ASP.NET project-creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features:

- Authentication filters: these filters allow you to specify authentication logic per-action, per-controller, or globally for all controllers.
- Attribute routing: attribute routing allows you to define your routes on actions or controllers.

To learn more go to http://www.asp.net/mvc.

Entity Framework 6 Improvements

Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data-access space:

Async and Task<T> support

EF6's new async query and save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data-access scenarios. This allows you to free up threads that might otherwise be blocked on data-access requests, and enables them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds, the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code. Here is an example ASP.NET Web API action that makes use of the new EF6 async query methods:
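(Again reconstructed in place of the original screenshot – the context and entity names here are assumptions.)

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class ProductsController : ApiController
    {
        private StoreContext db = new StoreContext();  // hypothetical EF6 context

        // The request thread is released while the query runs; execution resumes
        // here when the database returns.
        public async Task<List<Product>> GetProducts()
        {
            return await db.Products.ToListAsync();  // one of the new EF6 async methods
        }
    }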
Interception and logging

Interception and SQL logging allow you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human-readable log – which is great for debugging – as well as some lower-level building blocks that give you access to the command and results. Wiring the simple log up to Debug in the constructor of an MVC controller is a one-liner, reconstructed in the combined sketch at the end of this Entity Framework section.

Custom Code First conventions

The new custom Code First conventions enable bulk configuration of a Code First model, reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, a convention can configure all properties that are called 'Key' to be the primary key of the entity they belong to – different from the default Code First convention, which expects Id or <type name>Id (see the same combined sketch below).

Connection resiliency

The new connection resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. This is especially useful when deploying to cloud environments, where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and the time between retries when errors occur. Registering it is simple using the new code-based configuration support:
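(The original screenshots for the resiliency, logging, and convention snippets are not reproduced here; the sketch below reconstructs all three under assumed names.)

    using System.Data.Entity;
    using System.Data.Entity.SqlServer;
    using System.Linq;
    using System.Web.Mvc;

    // 1. Code-based configuration registering the built-in SQL Azure retry strategy:
    public class MyConfiguration : DbConfiguration
    {
        public MyConfiguration()
        {
            SetExecutionStrategy("System.Data.SqlClient",
                                 () => new SqlAzureExecutionStrategy());
        }
    }

    // 2. The logging snippet referenced earlier – wiring the simple log to Debug
    //    in an MVC controller's constructor ("StoreContext" is hypothetical):
    public class ProductsController : Controller
    {
        private StoreContext db = new StoreContext();

        public ProductsController()
        {
            db.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);
        }
    }

    // 3. The convention referenced earlier – every property named "Key" becomes
    //    the primary key of its entity (placed inside the context's OnModelCreating):
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Properties()
                    .Where(p => p.Name == "Key")
                    .Configure(c => c.IsKey());
    }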

These are just some of the new features in EF6; you can visit the release notes section of the Entity Framework site for a complete list.

Microsoft OWIN Components

Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

Summary

Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from the server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Accessing Instance Attributes from Secondary Thread (iPhone-SDK)

    - by Travis
    I have a class with an NSDictionary attribute. Inside this class I dispatch another thread to handle NSXMLParser handling. Inside my -didStartElement, I access the dictionary in the class (to compare an element found in the XML to one in the dictionary). At this point I get undefined results. Using NSLog (I'm not advanced in XCode debugging), I see that it bombs around access of the NSDictionary. I tried just iterating the dictionary and dumping the key/values inside the didStartElement and this bombs at different keys each time. The only thing I can conclude is that something is not kosher that I'm doing with regards to accessing main thread attributes from the secondary thread. I'm somewhat new to multithreading and am not sure what the best protocol is safely access attributes from additional threads. Thanks all.

    Read the article

  • Map a Network Drive from XP to Windows 7

    - by Mysticgeek
    We’ve received a lot of questions about mapping a drive from XP to Windows 7 to access data easily. Today we look at how to map a drive in Windows 7, and how to map to an XP drive from Windows 7. With the new Homegroup feature in Windows 7, it makes sharing data between computers a lot easier. But you might need to map a network drive so you can go directly into a folder to access its contents. Mapping a network drive may sound like “IT talk”, but the process is fairly easy. Map Network Drive in Windows 7 Note: All of the computers used in this article are part of the same workgroup on a home network. In this first example we’re mapping to another Windows 7 drive on the network. Open Computer and from the toolbar click on Map Network Drive. Alternately in Computer you can hit “Alt+T” to pull up the toolbar and click on Tools \ Map Network Drive. Now give it an available drive letter, type in the path or browse to the folder you want to map to. Check the box next to Reconnect at logon if you want it available after a reboot, and click Finish. If both machines aren’t part of the same Homegroup, you may be prompted to enter in a username and password. Make sure and check the box next to Remember my credentials if you don’t want to log in every time to access it. The drive will map and the contents of the folder will open up. When you look in Computer, you’ll see the drive under network location. This process works if you want to connect to a server drive as well. In this example we map to a Home Server drive. Map an XP Drive to Windows 7 There might be times when you need to map a drive on an XP machine on your network. There are extra steps you’ll need to take to make it work however. Here we take a look at the problem you’ll encounter when trying to map to an XP machine if things aren’t set up correctly. If you try to browse to your XP machine you’ll see a message that you don’t have permission. Or if you try to enter in the path directly, you’ll be prompted for a username and password, and the annoyance is, no matter what credentials you put in, you can’t connect. To solve the problem we need to set up the Windows 7 machine as a user on the XP machine and make them part of the Administrators group. Right-click My Computer and select Manage. Under Computer Management expand Local Users and Groups and click on the Users folder. Right-click an empty area and click New User. Add in the user credentials, uncheck User must change password at next logon, then check Password never expires then click Create. Now you see the new user you created in the list. After the user is added you might want to reboot before proceeding to the next step.   Next we need to make the user part of the Administrators group. So go back into Computer Management \ Local Users and Groups \ Groups then double click on Administrators. Click the Add button in Administrators Properties window. Enter in the new user you created and click OK. An easy way to do this is to enter the name of the user you created then click Check Names and the path will be entered in for you. Now you see the user as a member of the Administrators group. Back on the Windows 7 machine we’ll start the process of mapping a drive. Here we’re browsing to the XP Media Center Edition machine. Now we can enter in the user name and password we just created. If you only want to access specific shared folders on the XP machine you can browse to them. 
Or, if you want to map the entire drive, enter the drive path – in this example “\\XPMCE\C$” – and don’t forget the “$” sign after the local drive letter. Then log in. Again, the contents of the drive will open up for you to access. Here you can see we have two drives mapped: one to another Windows 7 machine on the network, and the other to the XP computer. If you ever want to disconnect a drive, just right-click on it and choose Disconnect. There are several scenarios where you might want to map a drive in Windows 7 to access specific data. It takes a little bit of work, but you can map to an XP drive from Windows 7 as well. This comes in handy when you have a network with different versions of Windows running on it.
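For reference, the same mapping can also be done from a command prompt with net use (a sketch – substitute your own machine, share, account name and password):

    net use Z: \\XPMCE\C$ password /user:XPMCE\win7user /persistent:yes

Here Z: is the drive letter, \\XPMCE\C$ is the administrative share, and /persistent:yes is the equivalent of checking Reconnect at logon.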

    Read the article

  • Defining multiple values in DefineConstants in MsBuild element?

    - by Sardaukar
    I'm currently integrating my Wix projects in MSBuild. It is necessary for me to pass multiple values to the Wix project. One value will work (ProductVersion in the sample below). <Target Name="BuildWixSetups"> <MSBuild Condition="'%(WixSetups.Identity)'!=''" Projects="%(WixSetups.Identity)" Targets="Rebuild" Properties="Configuration=Release;OutputPath=$(OutDir);DefineConstants=ProductVersion=%(WixSetups.ISVersion)" ContinueOnError="true"/> </Target> However, how do I pass multiple values to the DefineConstants key? I've tried all the 'logical' separators (space, comma, semi-colon, pipe-symbol), but this doesn't work. Has someone else come across this problem? Solutions that don't work: Trying to add a DefineConstants element does not work because DefineConstants needs to be expressed within the Properties attribute.

    Read the article

  • OpenRemoteBaseKey() credentials

    - by sgibbons
    I'm attempting to use powershell to access a remote registry like so: $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("LocalMachine", $server) $key = $reg.OpenSubkey($subkeyPath) Depending on some factors that I'm not yet able to determine I either get Exception calling "OpenSubKey" with "1" argument(s): "Requested registry access is not allowed." Or System.UnauthorizedAccessException: Attempted to perform an unauthorized operation. at Microsoft.Win32.RegistryKey.Win32ErrorStatic(Int32 errorCode, String str) at Microsoft.Win32.RegistryKey.OpenRemoteBaseKey(RegistryHive hKey, String machineName) It seems pretty clear that this is because the user I'm running the powershell script as doesn't have the appropriate credentials to access the remote registry. I'd like to be able to supply a set of credentials to use for the remote registry access, but I can find no documentation anywhere of a way to do this. I'm also not clear on exactly where to specify which users are allowed to access the registry remotely.

    Read the article

  • DomainService method not compiling; claims "Return types must be an entity ..."

    - by Duncan Bayne
    I have a WCF RIA Domain Service that contains a method I'd like to invoke when the user clicks a button: [Invoke] public MyEntity PerformAnalysis(int someId) { return new MyEntity(); } However, when I try to compile I'm given the following error: Operation named 'PerformAnalysis' does not conform to the required signature. Return types must be an entity, collection of entities, or one of the predefined serializable types. The thing is, as far as I can tell, MyEntity is an entity: [Serializable] public class MyEntity: EntityObject, IMyEntity { [Key] [DataMember] [Editable(false)] public int DummyKey { get; set; } [DataMember] [Editable(false)] public IEnumerable<SomeOtherEntity> Children { get; set; } } I figure I'm missing something simple here. Could someone please tell me how I can create an invokable method that returns a single MyEntity object?
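    A common cause of this error (a hedged guess rather than a confirmed fix) is that WCF RIA Services only recognizes a type as an entity when the domain service also exposes it through a query method, so adding one alongside the invoke operation may satisfy the signature check:

        // Hypothetical query method: its presence exposes MyEntity as an entity type.
        public IQueryable<MyEntity> GetMyEntities()
        {
            return null;  // never invoked in this scenario; it only registers the type
        }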

    Read the article

  • "Could not authenticate you." error when using Twitter OAuth

    - by Martti Laine
    Hello I'm building my first system using Twitters OAuth and have some issues. First, I'm using Abraham's Twitter-class for this and I have followed this tutorial. However, I get these lines on my callback.php: Warning: array_merge() [function.array-merge]: Argument #2 is not an array in C:\xampp\htdocs\twitter\twitterOAuth\OAuth.php on line 301 Warning: strtoupper() expects parameter 1 to be string, array given in C:\xampp\htdocs\twitter\twitterOAuth\OAuth.php on line 373 Oops - an error has occurred. SimpleXMLElement Object ( [request] => /account/verify_credentials.xml [error] => Could not authenticate you. ) Is this problem by Twitter-class, or am I doing something wrong? I have my Consumer Key and Consumer Secret in config.php as tutorial says, but should I store something else? Martti Laine

    Read the article

  • virt-install virtual machine hangs on Probing EDD

    - by user2256235
    I'm using vir-stall virtual machine, and my command is virt-install --name=gust --vcpus=4 --ram=8192 --network bridge:br0 --cdrom=/opt/rhel-server-6.2-x86_64-dvd.iso --disk path=/opt/as1/as1.img,size=50 --accelerate After running the command, it hangs on probing EDD, - Press the <ENTER> key to begin the installation process. +----------------------------------------------------------+ | Welcome to Red Hat Enterprise Linux 6.2! | |----------------------------------------------------------| | Install or upgrade an existing system | | Install system with basic video driver | | Rescue installed system | | Boot from local drive | | Memory test | | | | | | | | | | | | | | | +----------------------------------------------------------+ Press [Tab] to edit options Automatic boot in 57 seconds... Loading vmlinuz...... Loading initrd.img...............................ready. Probing EDD (edd=off to disable)... ok ÿ Previously, I wait a long time, it seems no marching. After I press ctrl + ] and stop it. I find it was running using virsh list, but I cannot console it using virsh concole gust. Any problem and how should I do. Many Thanks

    Read the article

  • JsTree v1.0 - How to effectively manipulate data from the backend so the trees render and operate correctly?

    - by Jean Paul
    Backend info: PHP 5 / MySQL URL: http://github.com/downloads/vakata/jstree/jstree_pre1.0_fix_1.zip Table structure for table discussions_tree -- CREATE TABLE IF NOT EXISTS `discussions_tree` ( `id` int(11) NOT NULL AUTO_INCREMENT, `parent_id` int(11) NOT NULL DEFAULT '0', `user_id` int(11) NOT NULL DEFAULT '0', `label` varchar(16) DEFAULT NULL, `position` bigint(20) unsigned NOT NULL DEFAULT '0', `left` bigint(20) unsigned NOT NULL DEFAULT '0', `right` bigint(20) unsigned NOT NULL DEFAULT '0', `level` bigint(20) unsigned NOT NULL DEFAULT '0', `type` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL, `h_label` varchar(16) NOT NULL DEFAULT '', `fulllabel` varchar(255) DEFAULT NULL, UNIQUE KEY `uidx_3` (`id`), KEY `idx_1` (`user_id`), KEY `idx_2` (`parent_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; /*The first element should in my understanding not even be shown*/ INSERT INTO `discussions_tree` (`id`, `parent_id`, `user_id`, `label`, `position`, `left`, `right`, `level`, `type`, `h_label`, `fulllabel`) VALUES (0, 0, 0, 'Contacts', 0, 1, 1, 0, NULL, '', NULL); INSERT INTO `discussions_tree` (`id`, `parent_id`, `user_id`, `label`, `position`, `left`, `right`, `level`, `type`, `h_label`, `fulllabel`) VALUES (1, 0, 0, 'How to Tag', 1, 2, 2, 0, 'drive', '', NULL); Front End : I've simplified the logic, it has 6 trees actually inside of a panel and that works fine $array = array("Discussions"); $id_arr = array("d"); $nid = 0; foreach ($array as $k=> $value) { $nid++; ?> <li id="<?=$value?>" class="label"> <a href='#<?=$value?>'><span> <?=$value?> </span></a> <div class="sub-menu" style="height:auto; min-height:120px; background-color:#E5E5E5" > <div class="menu" id="menu_<?=$id_arr[$k]?>" style="position:relative; margin-left:56%"> <img src="./js/jsTree/create.png" alt="" id="create" title="Create" > <img src="./js/jsTree/rename.png" alt="" id="rename" title="Rename" > <img src="./js/jsTree/remove.png" alt="" id="remove" title="Delete"> <img src="./js/jsTree/cut.png" alt="" id="cut" title="Cut" > <img src="./js/jsTree/copy.png" alt="" id="copy" title="Copy"> <img src="./js/jsTree/paste.png" alt="" id="paste" title="Paste"> </div> <div id="<?=$id_arr[$k]?>" class="jstree_container"></div> </div> </li> <!-- JavaScript neccessary for this tree : <?=$value?> --> <script type="text/javascript" > jQuery(function ($) { $("#<?=$id_arr[$k]?>").jstree({ // List of active plugins used "plugins" : [ "themes", "json_data", "ui", "crrm" , "hotkeys" , "types" , "dnd", "contextmenu"], // "ui" :{ "initially_select" : ["#node_"+ $nid ] } , "crrm": { "move": { "always_copy": "multitree" }, "input_width_limit":128 }, "core":{ "strings":{ "new_node" : "New Tag" }}, "themes": {"theme": "classic"}, "json_data" : { "ajax" : { "url" : "./js/jsTree/server-<?=$id_arr[$k]?>.php", "data" : function (n) { // the result is fed to the AJAX request `data` option return { "operation" : "get_children", "id" : n.attr ? n.attr("id").replace("node_","") : 1, "state" : "", "user_id": <?=$uid?> }; } } } , "types" : { "max_depth" : -1, "max_children" : -1, "types" : { // The default type "default" : { "hover_node":true, "valid_children" : [ "default" ], }, // The `drive` nodes "drive" : { // can have files and folders inside, but NOT other `drive` nodes "valid_children" : [ "default", "folder" ], "hover_node":true, "icon" : { "image" : "./js/jsTree/root.png" }, // those prevent the functions with the same name to be used on `drive` nodes.. 
internally the `before` event is used "start_drag" : false, "move_node" : false, "remove_node" : false } } }, "contextmenu" : { "items" : customMenu , "select_node": true} }) //Hover function binded to jstree .bind("hover_node.jstree", function (e, data) { $('ul li[rel="drive"], ul li[rel="default"], ul li[rel=""]').each(function(i) { $(this).find("a").attr('href', $(this).attr("id")+".php" ); }) }) //Create function binded to jstree .bind("create.jstree", function (e, data) { $.post( "./js/jsTree/server-<?=$id_arr[$k]?>.php", { "operation" : "create_node", "id" : data.rslt.parent.attr("id").replace("node_",""), "position" : data.rslt.position, "label" : data.rslt.name, "href" : data.rslt.obj.attr("href"), "type" : data.rslt.obj.attr("rel"), "user_id": <?=$uid?> }, function (r) { if(r.status) { $(data.rslt.obj).attr("id", "node_" + r.id); } else { $.jstree.rollback(data.rlbk); } } ); }) //Remove operation .bind("remove.jstree", function (e, data) { data.rslt.obj.each(function () { $.ajax({ async : false, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "remove_node", "id" : this.id.replace("node_",""), "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { data.inst.refresh(); } } }); }); }) //Rename operation .bind("rename.jstree", function (e, data) { data.rslt.obj.each(function () { $.ajax({ async : true, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "rename_node", "id" : this.id.replace("node_",""), "label" : data.rslt.new_name, "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { data.inst.refresh(); } } }); }); }) //Move operation .bind("move_node.jstree", function (e, data) { data.rslt.o.each(function (i) { $.ajax({ async : false, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "move_node", "id" : $(this).attr("id").replace("node_",""), "ref" : data.rslt.cr === -1 ? 1 : data.rslt.np.attr("id").replace("node_",""), "position" : data.rslt.cp + i, "label" : data.rslt.name, "copy" : data.rslt.cy ? 
1 : 0, "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { $.jstree.rollback(data.rlbk); } else { $(data.rslt.oc).attr("id", "node_" + r.id); if(data.rslt.cy && $(data.rslt.oc).children("UL").length) { data.inst.refresh(data.inst._get_parent(data.rslt.oc)); } } } }); }); }); // This is for the context menu to bind with operations on the right clicked node function customMenu(node) { // The default set of all items var control; var items = { createItem: { label: "Create", action: function (node) { return {createItem: this.create(node) }; } }, renameItem: { label: "Rename", action: function (node) { return {renameItem: this.rename(node) }; } }, deleteItem: { label: "Delete", action: function (node) { return {deleteItem: this.remove(node) }; }, "separator_after": true }, copyItem: { label: "Copy", action: function (node) { $(node).addClass("copy"); return {copyItem: this.copy(node) }; } }, cutItem: { label: "Cut", action: function (node) { $(node).addClass("cut"); return {cutItem: this.cut(node) }; } }, pasteItem: { label: "Paste", action: function (node) { $(node).addClass("paste"); return {pasteItem: this.paste(node) }; } } }; // We go over all the selected items as the context menu only takes action on the one that is right clicked $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element) { if ( $(element).attr("id") != $(node).attr("id") ) { // Let's deselect all nodes that are unrelated to the context menu -- selected but are not the one right clicked $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); } }); //if any previous click has the class for copy or cut $("#<?=$id_arr[$k]?>").find("li").each(function(index,element) { if ($(element) != $(node) ) { if( $(element).hasClass("copy") || $(element).hasClass("cut") ) control=1; } else if( $(node).hasClass("cut") || $(node).hasClass("copy")) { control=0; } }); //only remove the class for cut or copy if the current operation is to paste if($(node).hasClass("paste") ) { control=0; // Let's loop through all elements and try to find if the paste operation was done already $("#<?=$id_arr[$k]?>").find("li").each(function(index,element) { if( $(element).hasClass("copy") ) $(this).removeClass("copy"); if ( $(element).hasClass("cut") ) $(this).removeClass("cut"); if ( $(element).hasClass("paste") ) $(this).removeClass("paste"); }); } switch (control) { //Remove the paste item from the context menu case 0: switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; break; case "default": delete items.pasteItem; break; } break; //Remove the paste item from the context menu only on the node that has either copy or cut added class case 1: if( $(node).hasClass("cut") || $(node).hasClass("copy") ) { switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; break; case "default": delete items.pasteItem; break; } } else //Re-enable it on the clicked node that does not have the cut or copy class { switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; break; } } break; //initial state don't show the paste option on any node default: switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; 
break; case "default": delete items.pasteItem; break; } break; } return items; } $("#menu_<?=$id_arr[$k]?> img").hover( function () { $(this).css({'cursor':'pointer','outline':'1px double teal'}) }, function () { $(this).css({'cursor':'none','outline':'1px groove transparent'}) } ); $("#menu_<?=$id_arr[$k]?> img").click(function () { switch(this.id) { //Create only the first element case "create": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("create", '#'+$(element).attr("id"), null, /*{attr : {href: '#' }}*/null ,null, false); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //REMOVE case "remove": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ //only execute if the current node is not the first one (drive) if( $(element).attr("id") != $("div.jstree > ul > li").first().attr("id") ) { $("#<?=$id_arr[$k]?>").jstree("remove",'#'+$(element).attr("id")); } else $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //RENAME NODE only one selection case "rename": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ if( $(element).attr("id") != $("div.jstree > ul > li").first().attr("id") ) { switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("rename", '#'+$(element).attr("id") ); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } } else $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //Cut case "cut": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("cut", '#'+$(element).attr("id")); $.facebox('<p class=\'p_inner teal\'>Operation "Cut" successfully done.<p class=\'p_inner teal bold\'>Where to place it?'); setTimeout(function(){ $.facebox.close(); $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id")); }, 2000); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //Copy case "copy": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("copy", '#'+$(element).attr("id")); $.facebox('<p 
class=\'p_inner teal\'>Operation "Copy": Successfully done.<p class=\'p_inner teal bold\'>Where to place it?'); setTimeout(function(){ $.facebox.close(); $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }, 2000); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; case "paste": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("paste", '#'+$(element).attr("id")); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; } }); <? } ?> server.php $path='../../../..'; require_once "$path/phpfoo/dbif.class"; require_once "$path/global.inc"; // Database config & class $db_config = array( "servername"=> $dbHost, "username" => $dbUser, "password" => $dbPW, "database" => $dbName ); if(extension_loaded("mysqli")) require_once("_inc/class._database_i.php"); else require_once("_inc/class._database.php"); //Tree class require_once("_inc/class.ctree.php"); $dbLink = new dbif(); $dbErr = $dbLink->connect($dbName,$dbUser,$dbPW,$dbHost); $jstree = new json_tree(); if(isset($_GET["reconstruct"])) { $jstree->_reconstruct(); die(); } if(isset($_GET["analyze"])) { echo $jstree->_analyze(); die(); } $table = '`discussions_tree`'; if($_REQUEST["operation"] && strpos($_REQUEST["operation"], "_") !== 0 && method_exists($jstree, $_REQUEST["operation"])) { foreach($_REQUEST as $k => $v) { switch($k) { case 'user_id': //We are passing the user_id from the $_SESSION on each request and trying to pick up the min and max value from the table that matches the 'user_id' $sql = "SELECT max(`right`) , min(`left`) FROM $table WHERE `user_id`=$v"; //If the select does not return any value then just let it be :P if (!list($right, $left)=$dbLink->getRow($sql)) { $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v WHERE `id` = 1 AND `parent_id` = 0"); $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v WHERE `parent_id` = 1 AND `label`='How to Tag' "); } else { $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v, `right`=$right+2 WHERE `id` = 1 AND `parent_id` = 0"); $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v, `left`=$left+1, `right`=$right+1 WHERE `parent_id` = 1 AND `label`='How to Tag' "); } break; } } header("HTTP/1.0 200 OK"); header('Content-type: application/json; charset=utf-8'); header("Cache-Control: no-cache, must-revalidate"); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Pragma: no-cache"); echo $jstree->{$_REQUEST["operation"]}($_REQUEST); die(); } header("HTTP/1.0 404 Not Found"); ?> The problem: DND *(Drag and Drop) works, Delete works, Create works, Rename works, but Copy, Cut and Paste don't work

    Read the article

  • SqlCeException: The column cannot be modified. [ Column name = id ]

    - by pek
    I am simply trying to add a single row in the database but I keep getting an exception. I created a local database and added a single table: users. It consists of two columns: "id" and "name". I only made the id primary key (not auto-increment or anything else). When I run the following code: string execPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase); string dbfile = execPath + @"\LocalDatabase.sdf"; SqlCeConnection conn = new SqlCeConnection("datasource=" + dbfile); conn.Open(); string command = "INSERT INTO users VALUES('1','pek')"; Debug.WriteLine(command); SqlCeCommand comm = conn.CreateCommand(); comm.CommandText = command; comm.ExecuteNonQuery(); I get the following Exception at "comm.ExecuteNonQuery();": SqlCeException was unhandled The column cannot be modified. [ Column name = id ] What's with the "modified" part?

    Read the article

  • How do I delete multiple rows in Entity Framework (without foreach)

    - by Jon Galloway
    I'm deleting several items from a table using Entity Framework. There isn't a foreign key / parent object so I can't handle this with OnDeleteCascade. Right now I'm doing this: var widgets = context.Widgets .Where(w => w.WidgetId == widgetId); foreach (Widget widget in widgets) { context.Widgets.DeleteObject(widget); } context.SaveChanges(); It works but the foreach bugs me. I'm using EF4 but I don't want to execute SQL. I just want to make sure I'm not missing anything - this is as good as it gets, right? I can abstract it with an extension method or helper, but somewhere we're still going to be doing a foreach, right?

    Read the article

  • ProtectedData.Unprotect() after Impersonate()

    - by Andrey
    The following code doesn't work: IntPtr token = Win32Dll.LogonUser(“user1”, “mydomain”, “password1”); WindowsIdentity id = new WindowsIdentity(token); WindowsImpersonationContext ic = id.Impersonate(); byte[] unprotectedBytes = ProtectedData.Unprotect(passwordBytes, null, DataProtectionScope.CurrentUser); password = Encoding.Unicode.GetString(unprotectedBytes); ic.Undo(); The password is not decrypted. MSDN said "If you use this method during impersonation, you may receive the following error: "Key not valid for use in specified state." This error can be prevented by loading the profile of the user you want to impersonate, before calling the method." I would be very grateful for the help!

    Read the article

< Previous Page | 718 719 720 721 722 723 724 725 726 727 728 729  | Next Page >