Search Results

Search found 13068 results on 523 pages for 'copy paste'.

Page 358/523

  • Sql Server - INSERT INTO SELECT to avoid duplicates

    - by Ashish Gupta
    I have the following two tables: Table1 ------------- ID Name 1 A 2 B 3 C Table2 -------- ID Name 1 Z I need to insert data from Table1 to Table2, and I can use the following syntax for that: INSERT INTO Table2(Id, Name) SELECT Id, Name FROM Table1 However, duplicate Ids might exist in Table2 (in my case it's just "1") and I don't want to copy those again, as that would throw an error. I can write something like this: IF NOT EXISTS(SELECT 1 FROM Table2 WHERE Id=1) INSERT INTO Table2 (Id, name) SELECT Id, name FROM Table1 ELSE INSERT INTO Table2 (Id, name) SELECT Id, name FROM Table1 WHERE Table1.Id<>1 Is there a better way to do this without using IF - ELSE? I want to avoid two INSERT INTO-SELECT statements based on some condition. Any help is appreciated.
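
    A single-statement variant of what is described here can filter out the rows whose Id already exists in the target. A minimal sketch, assuming the two tables above:

        INSERT INTO Table2 (Id, Name)
        SELECT t1.Id, t1.Name
        FROM Table1 AS t1
        WHERE NOT EXISTS (SELECT 1 FROM Table2 AS t2 WHERE t2.Id = t1.Id);

    If the server is 2008 or later, a MERGE statement is another single-statement option, but the NOT EXISTS filter above keeps it to one INSERT ... SELECT.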

    Read the article

  • A JTAG emulator for use with a Hawkboard and OpenOCD?

    - by David Brown
    I'd like to try bare metal ARM programming with the Hawkboard, but the deployment process looks awful. I'm totally new to this, so I could be misunderstanding the instructions, but it appears that I have to use a program called AISgen to convert the binary file, then boot with U-Boot over UART and copy the AIS binary into memory. Not only is that a lot of stuff to do every time I make a change, it also doesn't give me the ability to debug with GDB. The best solution for this that I can find is JTAG. But the prices for these JTAG emulators look ridiculous. I'm not even sure which ones will work with the Hawkboard and which ones won't. So far, my best bet appears to be the Flyswatter, but the pin layout is different. Basically, I need something that's relatively cheap and works with the Hawkboard and OpenOCD. Any suggestions? Or is there another way I could do this, perhaps?

    Read the article

  • django media url is not resolved in 500 internal server error template

    - by Tom Tom
    Hi, I'm using a 500.html template for my app, which is an identical copy of the 404.html with some minor text changes. Interestingly, the {{ media_url }} context variable is not resolved by the server when the 500.html is rendered (e.g. when I force an internal server error), resulting in a page without any CSS loaded. An easy way to circumvent this would be to hardcode the links to the CSS, but I'm just curious why the media_url is not resolved. Probably it is because the server encounters an internal server error, and that means the context variables are no longer available?
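
    For what it's worth, Django's stock server_error view renders 500.html with an empty context, so context processors (including the one that supplies MEDIA_URL) never run. A minimal sketch of a custom handler500 that renders the template with a RequestContext instead; the module path and view name here are illustrative, not from the question:

        # urls.py (root URLconf)
        handler500 = 'myapp.views.server_error'   # hypothetical module path

        # myapp/views.py
        from django.http import HttpResponseServerError
        from django.template import RequestContext, loader

        def server_error(request, template_name='500.html'):
            # Render with RequestContext so context processors (MEDIA_URL, etc.) run.
            t = loader.get_template(template_name)
            return HttpResponseServerError(t.render(RequestContext(request)))

    One caveat: if the error was caused by something a context processor itself depends on (for example the database), this view can fail too, which is why the default deliberately uses an empty context.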

    Read the article

  • Word 2007 Question

    - by Lijo
    Hi Team, While preparing a Word 2007 document, I made a mistake. (Needless to say, I don't have any other copy of the document.) While experimenting with formatting, I applied the style "Apply Style to Body to match selection". This caused the document to take on a totally wrong format, with numbering appearing even in tables. Have you ever faced this? Could you please tell me how to correct it? Hope you would be kind enough to answer this even though it is not strictly technical. Thanks Lijo

    Read the article

  • Feeding a Drill Down Menu with categories, subcategories and subSubcategories from a database

    - by Hassan
    Hi everyone, I have a drill-down menu and I want it to get its elements from a database. I am using PHP and MySQL, and the table (categories) looks like this: http://yfrog.com/jctablehsj I can't figure out how to extract this information in a way that I can feed into the drill-down menu. I found the recursive method (with LEFT JOINs) and the nested-set method, which I barely understood, and again I couldn't apply it to the drill-down menu. I found that some people worked out a solution with LEFT JOIN and GROUP BY, but I couldn't understand or copy their example. I would be more than grateful if you could give me the exact query. Thanks a lot for your hard work, Hassan
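
    For an adjacency-list table, one way to pull three levels in a single query is a pair of LEFT JOINs. A sketch with assumed column names (id, name, parent_id), since the actual schema is only visible in the screenshot:

        SELECT c1.name AS category,
               c2.name AS subcategory,
               c3.name AS sub_subcategory
        FROM categories AS c1
        LEFT JOIN categories AS c2 ON c2.parent_id = c1.id
        LEFT JOIN categories AS c3 ON c3.parent_id = c2.id
        WHERE c1.parent_id IS NULL
        ORDER BY c1.name, c2.name, c3.name;

    The PHP side can then loop over the rows and nest them into arrays keyed by category and subcategory before rendering the menu markup.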

    Read the article

  • Remove files from Bazaar

    - by Kristopher Ives
    I'm using Bazaar (bzr) to keep source code for a website updated, but we've run into a problem when we remove files from version control. The files we are removing are ones we never intended to version to begin with. When this happens we use bzr rm --keep to remove the file from version control, but keep the file in the file system. Doing a bzr push or bzr pull then results in the removed file(s) being removed on the other branches (other sites that use our code). We need a way to make sure that a bzr push or bzr pull doesn't actually remove those files from the working copy. Anyone have any ideas?

    Read the article

  • Amazon S3 Add METADATA to existing KEY

    - by Daveo
    In the S3 REST API I am adding metadata to an existing object by using the PUT (Copy) command, copying a key onto the same location with 'x-amz-metadata-directive' = 'REPLACE'. What I want to do is change the download file name by setting: Content-Disposition: attachment; filename=foo.bar; This sets the metadata correctly, but when I download the file it still uses the key name instead of 'foo.bar'. I use a software tool, S3 Browser, to view the metadata and it looks correct (apart from 'Content-Disposition' being all lower case, as that is what S3 asked me to sign). Then, using S3 Browser, I just pressed Save without changing anything, and now it works. What am I missing? How come setting the metadata 'Content-Disposition: attachment; filename=foo.bar;' from my web app does not work, but it does work from S3 Browser?
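
    For reference, a PUT copy that replaces metadata has to carry the new headers on the copy request itself. Roughly, the request the web app signs and sends would look like this (bucket, key, date and signature are placeholders):

        PUT /mykey HTTP/1.1
        Host: mybucket.s3.amazonaws.com
        Date: <date>
        Authorization: <signature>
        x-amz-copy-source: /mybucket/mykey
        x-amz-metadata-directive: REPLACE
        Content-Disposition: attachment; filename=foo.bar

    One thing worth checking: if the value reaches S3 as user metadata (an x-amz-meta-content-disposition header) rather than as the Content-Disposition header itself, S3 stores it but never emits it on download, which would match the symptom described.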

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index stuff), and the biggest table is 1.1 million records which takes up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SQL Server (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week. sp_spaceused reports the following: Navigator-Production 19184.56 MB 3.02 MB And the second part of it: 19640872 KB 19512112 KB 108184 KB 20576 KB I've found a few other scripts (such as the ones from two of the server database size questions here), and they all report the same information found above or below. The script I am using is from SQLTeam. Here is the header info: * BigTables.sql * Bill Graziano (SQLTeam.com) * graz@<email removed> * v1.11 The top few tables show this (table, rows, reserved space, data, index, unused, etc.):
        Activity 1143639 131 MB 89 MB 41768 KB 1648 KB 46% 1%
        EventAttendance 883261 90 MB 58 MB 32264 KB 328 KB 54% 0%
        Person 113437 31 MB 15 MB 15752 KB 912 KB 103% 3%
        HouseholdMember 113443 12 MB 6 MB 5224 KB 432 KB 82% 4%
        PostalAddress 48870 8 MB 6 MB 2200 KB 280 KB 36% 3%
    The rest of the tables are either the same size or smaller. No more than 50 tables. Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything. I ran the DBCC shrink command as well as updating the usage before and after, over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs were running; this is a very new application, under a week old), every now and then the shrink would say something about data having changed. Googling yielded too few useful answers, with the obvious not applying (it was 1am and I disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, is probably under 50k rows. Even if those rows were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go. Update 2: sp_spaceused 'Activity' yields this (which seems right on the money): Activity 1143639 134488 KB 91072 KB 41768 KB 1648 KB Fill factor is 90. All primary keys are ints. Here is the command I used to update usage: DBCC UPDATEUSAGE(0); Update 3: Per Edosoft's request: Image 111975 2407773 19262184 It appears as though the Image table believes it's the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?
    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated the potential for that. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size. Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to roughly 2-5KB each; on a normal file system they don't consume much space, but on SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
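
    One way to confirm where those pages actually live is to break usage down by allocation unit type, which separates in-row data from the LOB data behind the image column. A sketch of the kind of query involved (SQL Server 2005 and later):

        SELECT o.name AS table_name,
               au.type_desc,
               SUM(au.total_pages) * 8 / 1024 AS total_mb
        FROM sys.allocation_units AS au
        JOIN sys.partitions AS p
            ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
            OR (au.type = 2 AND au.container_id = p.partition_id)
        JOIN sys.objects AS o ON o.object_id = p.object_id
        WHERE o.is_ms_shipped = 0
        GROUP BY o.name, au.type_desc
        ORDER BY total_mb DESC;

    If most of the space shows up as LOB_DATA for the Image table, the 19GB really is image content plus per-page overhead, and moving that data out of the database (or into its own filegroup) is the usual remedy.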

    Read the article

  • Idiom vs. pattern

    - by Roger Pate
    In the context of programming, how do idioms differ from patterns? I use the terms interchangeably and normally follow the most popular way I've heard something called, or the way it was called most recently in the current conversation, e.g. "the copy-swap idiom" and "singleton pattern". The best distinction I can come up with is that code meant to be copied almost literally is more often called a pattern, while code meant to be taken less literally is more often called an idiom, but even that isn't always true. This doesn't seem to be more than a stylistic or buzzword difference. Does that match your perception of how the terms are used? Is there a semantic difference?

    Read the article

  • VS2010 MVC and Entity Framework Model in Separate Project

    - by mdm
    Hi, I am trying to use an Entity Framework model (in a separate project) from an ASP.NET 4 MVC project (VS2010, C#). If I create the EF model inside the MVC project I have no problems, so I think I am missing some step. Things done: 1. added a reference to the EF class project; 2. added the connection string to the MVC web.config; 3. added a reference to System.Data.Entity in both web.config and the project references. Now I can use the model only if I copy the .edmx file to the Models folder, but that way the EF project is not external anymore. What am I missing? Thank you in advance.
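
    One thing worth double-checking is the metadata portion of the EntityClient connection string copied into the MVC web.config; the res://*/ wildcard lets the runtime find the model resources in the referenced assembly without copying the .edmx. A sketch, with the model and database names invented for illustration:

        <connectionStrings>
          <add name="MyEntities"
               connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyDb;Integrated Security=True&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>

    The MyModel part of each res:// path has to match the namespace/folder of the .edmx inside the class library (for example Models.MyModel.csdl if it sits in a Models folder there).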

    Read the article

  • sql server 2008 express one row write problem

    - by bojanskr
    Hi everyone, I have the most bizarre problem (at least it is bizarre to me) with MS SQL Server Express 2008. The problem is the following: on the development machine I use MS SQL Server 2008 Enterprise. I get some data from a WCF service and write that data to the db (as simple as it can be). I should point out, however, that the writing is done in a separate thread. But anyway, no problems during development; all the data is there. Then I set everything up (connection strings, .\SQLEXPRESS, other settings), build in Release and copy that to a test machine that has MS SQL Server Express installed (because my application is a client application and it should work with Express). I run the program, the program retrieves the data from the service, and when I look at the database I'm in for a big surprise: there's only one row written (the first row received from the WCF service). I would really appreciate any help; I'm in a deadlock here. Thanks in advance. Bojan

    Read the article

  • DataSource for Tomcat web app, Spring and Hibernate

    - by EugeneP
    The web app runs on Tomcat. The DataSource is configured in the Spring configuration and is used by Hibernate. If we cannot use JNDI, what would you suggest to use as a DataSource? Would org.springframework.jdbc.datasource.DriverManagerDataSource be OK? It's not very good, but frankly speaking it can be used on a production server, right? Just a bit of a headache with connections being reopened too frequently. Also, we can use BasicDataSource from Apache. It's much better of course, but here's the question. IF WE DON'T USE JNDI, THEN: if every instance of an app creates its own copy of a DataSource, and every DataSource can have 5 open connections, what do we get? Num_of_running_apps * Num_of_max_active_connections = max active open connections on the DB for this user? Second question: from the perspective of Hibernate, is there any difference in which DataSource implementation is used? Will it work reliably with any DataSource?
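
    If JNDI is off the table, a pooled DataSource such as Commons DBCP is the usual compromise. A minimal sketch of the Spring bean definition; the driver, URL, credentials and pool sizes are placeholders:

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
            <property name="username" value="appuser"/>
            <property name="password" value="secret"/>
            <property name="maxActive" value="20"/>
            <property name="maxIdle" value="5"/>
        </bean>

    Hibernate only sees the javax.sql.DataSource interface, so the pooling implementation behind it is interchangeable. And yes, each deployed application that creates its own pool holds its own connections, so the total on the database is roughly the number of running apps multiplied by the maximum pool size of each.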

    Read the article

  • How to add correct cancellation when downloading a file with the example in the samples of the new P

    - by Mike
    Hello everybody, I have downloaded the latest samples from the Parallel Programming team, and I haven't managed to correctly add the ability to cancel the download of a file. Here is the code I ended up with: var wreq = (HttpWebRequest)WebRequest.Create(uri); // Fire start event DownloadStarted(this, new DownloadStartedEventArgs(remoteFilePath)); long totalBytes = 0; wreq.DownloadDataInFileAsync(tmpLocalFile, cancellationTokenSource.Token, allowResume, totalBytesAction => { totalBytes = totalBytesAction; }, readBytes => { Log.Debug("Progression : {0} / {1} => {2}%", readBytes, totalBytes, 100 * (double)readBytes / totalBytes); DownloadProgress(this, new DownloadProgressEventArgs(remoteFilePath, readBytes, totalBytes, (int)(100 * readBytes / totalBytes))); }) .ContinueWith( (antecedent ) => { if (antecedent.IsFaulted) Log.Debug(antecedent.Exception.Message); //Fire end event SetEndDownload(antecedent.IsCanceled, antecedent.Exception, tmpLocalFile, 0); }, cancellationTokenSource.Token); I want to fire an end event after the download is finished, hence the ContinueWith. I slightly changed the code of the samples to add the CancellationToken and the two delegates that report the size of the file to download and the progress of the download: return webRequest.GetResponseAsync() .ContinueWith(response => { if (totalBytesAction != null) totalBytesAction(response.Result.ContentLength); response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct); }, ct); I had to add the call to the Wait function, because if I don't, the method exits and the end event is fired too early. Here are the modified method extensions (a lot of code, apologies :p) public static Task WriteAllBytesAsync(this Stream stream, string filePath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null) { if (stream == null) throw new ArgumentNullException("stream"); // Copy from the source stream to the memory stream and return the copied data return stream.CopyStreamToFileAsync(filePath, ct, resumeDownload, progressAction); } public static Task CopyStreamToFileAsync(this Stream source, string destinationPath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null) { if (source == null) throw new ArgumentNullException("source"); if (destinationPath == null) throw new ArgumentNullException("destinationPath"); // Open the output file for writing var destinationStream = FileAsync.OpenWrite(destinationPath); // Copy the source to the destination stream, then close the output file. return CopyStreamToStreamAsync(source, destinationStream, ct, progressAction).ContinueWith(t => { var e = t.Exception; destinationStream.Close(); if (e != null) throw e; }, ct, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Current); } public static Task CopyStreamToStreamAsync(this Stream source, Stream destination, CancellationToken ct, Action<long> progressAction = null) { if (source == null) throw new ArgumentNullException("source"); if (destination == null) throw new ArgumentNullException("destination"); return Task.Factory.Iterate(CopyStreamIterator(source, destination, ct, progressAction)); } private static IEnumerable<Task> CopyStreamIterator(Stream input, Stream output, CancellationToken ct, Action<long> progressAction = null) { // Create two buffers. One will be used for the current read operation and one for the current // write operation. We'll continually swap back and forth between them.
    byte[][] buffers = new byte[2][] { new byte[BUFFER_SIZE], new byte[BUFFER_SIZE] }; int filledBufferNum = 0; Task writeTask = null; int readBytes = 0; // Until there's no more data to be read or cancellation while (true) { ct.ThrowIfCancellationRequested(); // Read from the input asynchronously var readTask = input.ReadAsync(buffers[filledBufferNum], 0, buffers[filledBufferNum].Length); // If we have no pending write operations, just yield until the read operation has // completed. If we have both a pending read and a pending write, yield until both the read // and the write have completed. yield return writeTask == null ? readTask : Task.Factory.ContinueWhenAll(new[] { readTask, writeTask }, tasks => tasks.PropagateExceptions()); // If no data was read, nothing more to do. if (readTask.Result <= 0) break; readBytes += readTask.Result; if (progressAction != null) progressAction(readBytes); // Otherwise, write the written data out to the file writeTask = output.WriteAsync(buffers[filledBufferNum], 0, readTask.Result); // Swap buffers filledBufferNum ^= 1; } } So basically, at the end of the chain of called methods, I let the CancellationToken throw an OperationCanceledException if a Cancel has been requested. What I hoped was to get IsFaulted == true in the calling code and to fire the end event with the canceled flag and the correct exception. But what I get is an unhandled exception on the line response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct); telling me that I don't catch an AggregateException. I've tried various things, but I haven't succeeded in making the whole thing work properly. Has anyone played enough with that library to be able to help me? Thanks in advance Mike
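
    For what it's worth, Wait propagates cancellation and faults wrapped in an AggregateException, so the blocking call has to observe it or it surfaces as unhandled. A minimal sketch of one way to keep the cancellation contained, reusing the identifiers from the code above (everything else is TPL API):

        try
        {
            response.Result.GetResponseStream()
                .WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction)
                .Wait(ct);
        }
        catch (AggregateException ae)
        {
            // Swallow only the cancellation; anything else is rethrown so the
            // continuation still sees IsFaulted for real errors.
            ae.Handle(e => e is OperationCanceledException);
        }
        catch (OperationCanceledException)
        {
            // Wait(ct) itself throws this when ct is signalled while waiting.
        }

    Whether the outer continuation should then report IsCanceled or a normal completion is a design choice; the sketch only shows how to stop the exception from escaping unobserved.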

    Read the article

  • executing two functions with wshshell

    - by sushant
    I have two different functions (copy and zip) to be executed. Can I do it with a single WshShell script? I tried: Dim WshShell, oExec,g,h h="D:\d" g="xcopy " & h & " " & "D:\y\ /E & cmd /c cd D:\c & D: & winzip32.exe -min -a D:\a" Set WshShell = CreateObject("WScript.Shell") Set oExec = WshShell.Exec(g) Do While oExec.Status = 0 WScript.Sleep 100 Loop WScript.Echo oExec.Status It didn't work, though the separate programs, i.e. g="xcopy " & h & " " & "D:\y\ /E" and g="cmd /c cd D:\d & D: & winzip32.exe -min -a D:\a", work. I am sorry for the formatting problem. Any help is appreciated.
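
    One way to keep this to a single call is to hand the whole chain to cmd /c, since the && operators are interpreted by cmd rather than by WshShell. A sketch using the same paths as above:

        Dim shell, src, cmd
        src = "D:\d"
        ' Let cmd chain the copy, the directory change and the zip in one process
        cmd = "cmd /c xcopy " & src & " D:\y\ /E && cd /d D:\c && winzip32.exe -min -a D:\a"
        Set shell = CreateObject("WScript.Shell")
        shell.Run cmd, 1, True   ' third argument = wait until the whole chain finishes

    Using Run with the wait flag also removes the need for the Status polling loop; Exec would work too, as long as the command string is a single cmd /c invocation.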

    Read the article

  • Tools to backup an external hard disk

    - by Kaushik Gopal
    Hey people, What's the best method to take an exact copy of my external hard disk? A guru suggested rsync, but I was wondering if there's an easier alternative. I do remember reading somewhere that Acronis also does this. I was looking for your advice on the best option. I'm running Windows. Essentially I have an external HDD which has a lot of stuff synchronized across various PCs. I wish to take a backup of this external hard disk (external HDDs aren't entirely reliable, so I want to keep a backup of mine). Cheers. K
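
    Since this is Windows, robocopy (built into Vista and later, available for XP via the Resource Kit) can mirror a drive without extra software. A sketch, with the drive letters and destination invented:

        robocopy E:\ D:\Backups\ExternalHDD /MIR /R:2 /W:5 /LOG:C:\Temp\backup.log

    Note that /MIR also deletes files from the destination that no longer exist on the source, which is what makes it an exact copy rather than an additive one.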

    Read the article

  • Configuring gmail for use on mailing lists

    - by reemrevnivek
    This is really two questions in one. First, are netiquette guidelines still accurate in their restrictions on ASCII vs. HTML, posting style, and line length? (Here's a recent metafilter discussion of the topic.) Second, if they are not, should these guidelines be respected? If they are (or if they should still be respected), how can modern mail programs be configured to work properly with them? Most mailing list etiquette statements appear to have been written by sysadmins who love their command lines and refuse to change anything. Many still reference RFC 1855, written in 1995. Just reading that paginated TXT should give you an idea of the climate at the time. Here's a short, fairly random list of mailing list etiquette statements with some extracted formatting guidelines: Mozilla - HTML discouraged, interleaved posting. FreeBSD - No HTML, don't top post, line length at 75 characters. Fedora - No HTML, bottom-post. You get the idea. You've all seen etiquette statements before. So, assuming that the rules should be obeyed (usually a good idea), what can be done to allow me to still use a modern mail program, and exchange mail with friends who use the same programs? We like to format our mail. Bold headings, code snippets (sometimes syntax highlighted, if the copy-paste pulls RTF text as from Xcode and Eclipse), free line breaks determined by your browser width, and the (very) occasional image make the message easier to read. Threaded conversations are a wonderful thing. Broadband connections are, I'm sure, the rule for most of the users of SU and of developer mailing lists, disk space is cheap, and so the overhead of HTML is laughable. However, I don't want to post a question to a mailing list and have the guru who can answer my question automatically delete it, or come off as uncaring. Until I hear otherwise, I'll continue to respect the rules as best I can. For a common example of the problem, Gmail, by default, sends HTML-formatted messages with bottom-posted quotes (which are folded in, just read the last message immediately above), and uses the frame width to wrap lines, rather than a character count. ASCII can be selected, and quotes can be moved and reversed, but line wraps of quotes don't work, and line breaks are tedious to add (and more tedious to read, if they're super small in comparison to the width of the frame). Is there a forwarding, free mail program which can help with this exercise? Should an "RFC 1855 mode" lab be written? Or do I have to go to the command line for my mailing lists, and Gmail for my other mail?

    Read the article

  • How can I get this code involving unique_ptr to compile?!

    - by Neil G
    #include <vector>
        #include <memory>
        using namespace std;

        class A {
        public:
            A(): i(new int) {}
            A(A const& a) = delete;
            A(A &&a): i(move(a.i)) {}
            unique_ptr<int> i;
        };

        class AGroup {
        public:
            void AddA(A &&a) { a_.emplace_back(move(a)); }
            vector<A> a_;
        };

        int main() {
            AGroup ag;
            ag.AddA(A());
            return 0;
        }

    This does not compile; the compiler says that unique_ptr's copy constructor is deleted. I tried replacing move with forward. Not sure if I did it right, but it didn't work for me.

    Read the article

  • Global Ignores for SVN?

    - by Michael Stum
    Is there a way to set up a global list of ignores for an SVN repository, or for the SVN client on the PC? The only reason I'm using tools like Tortoise/Ankh/VisualSVN is because I want to check in only the files I need, without all the bin/obj/ReSharper stuff. I'm spoiled by .gitignore and .hgignore, which I just copy to a repository and then use "git commit -a" without having to care about checking in junk. I know I can set it manually, but that's tedious to do, and I think it has to be applied to every new folder that gets created as well. I'm using SVN under Windows, if that matters.
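
    Subversion does have a client-side equivalent: the global-ignores option in the runtime config file (%APPDATA%\Subversion\config on Windows, ~/.subversion/config elsewhere), which applies to every working copy on that machine. A sketch of the relevant section, with typical Visual Studio noise patterns as examples:

        [miscellany]
        global-ignores = bin obj *.user *.suo _ReSharper*

    Unlike .gitignore this isn't versioned with the repository, so each teammate sets it locally, or an admin sets svn:ignore properties on the relevant directories so the ignores travel with the repository.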

    Read the article

  • WPF Editable Combobox IsFocused problem

    - by Rey
    I am developing a watermarked ComboBox by modifying the ComboBox control template. Everything is fine when the ComboBox is not in editable mode, but when I change edit mode to true, the IsFocused property is never set to true. This is because in edit mode the ComboBox uses a TextBox. This is an exact copy of this Stack Overflow question: . There are no responses to that question. Please drop a line if you know how to solve this, or point me to links that provide a watermarked ComboBox implementation. Thanks, Rey.
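
    A common workaround is to key the watermark off IsKeyboardFocusWithin rather than IsFocused, since in editable mode the keyboard focus sits on the inner TextBox, not on the ComboBox itself. A sketch of the template trigger; the PART_Watermark element name is invented for illustration:

        <ControlTemplate.Triggers>
            <!-- Hide the watermark when focus is anywhere inside the ComboBox,
                 including the editable TextBox. -->
            <Trigger Property="IsKeyboardFocusWithin" Value="True">
                <Setter TargetName="PART_Watermark" Property="Visibility" Value="Collapsed"/>
            </Trigger>
        </ControlTemplate.Triggers>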

    Read the article

  • How to package .Net framework in Visual Studio project?

    - by raj.tiwari
    I have created a C#/.NET application using Visual Studio. I have also created an installer project that puts out two files: an MSI file and a Setup.exe file. In the installer project properties I have set up .NET 3.5 as a prerequisite. What I would like my installer to do is as follows: put out a single file (MSI/exe/whatever) that also includes the .NET Framework prerequisite. The installer should check whether the .NET Framework is installed on the target machine. If not, it should install it from its own bundled copy. Right now my installer sends people to the web to get .NET. This is not the user experience I want. Thanks for your help. -Raj

    Read the article

  • Casting a non-generic type to a generic one

    - by John Sheehan
    I've got this class:

        class Foo { public string Name { get; set; } }

    And this class:

        class Foo<T> : Foo { public T Data { get; set; } }

    Here's what I want to do:

        public Foo<T> GetSome() { Foo foo = GetFoo(); Foo<T> foot = (Foo<T>)foo; foot.Data = GetData<T>(); return foot; }

    What's the easiest way to convert Foo to Foo<T>? I can't cast directly (InvalidCastException) and I don't want to copy each property manually (in my actual use case, there's more than one property) if I don't have to. Is a user-defined type conversion the way to go?
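
    Since C# does not allow a user-defined conversion to or from a base class, the usual route is a small factory or extension method that builds the generic wrapper from the non-generic one. A sketch assuming only the properties shown above; the method name is invented:

        public static class FooExtensions
        {
            // Builds a Foo<T> from an existing Foo; extend this as properties are added.
            public static Foo<T> WithData<T>(this Foo source, T data)
            {
                return new Foo<T> { Name = source.Name, Data = data };
            }
        }

        // Usage inside GetSome():
        // Foo<T> foot = GetFoo().WithData(GetData<T>());

    It is still a manual copy under the hood, but it keeps the property mapping in one place instead of at every call site.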

    Read the article

  • Can you explain to me git reset in plain english?

    - by e-satis
    I have seen interesting posts explaining subtleties about git reset. Unfortunately, the more I read about it, the more it appears that I don't understand it fully. I come from an SVN background and git is a whole new paradigm. I got Mercurial easily, but git is much more technical. I think git reset is close to hg revert, but it seems there are differences. So what exactly does git reset do? Please include detailed explanations about: the options --hard, --soft and --merge; the strange notation you use with HEAD such as HEAD^ and HEAD~1; concrete use cases and workflows; consequences on the working copy, the HEAD and your global stress level. I will put a bounty on this ASAP because it's really important and I find the git documentation cryptic. Holy blessings and tons of chocolate/beer/name_your_stuff to whoever gives a no-brainer answer :-)
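
    As a quick reference while the full explanations come in, the common modes differ only in how far the reset reaches. A sketch against the commit one before the current one:

        git reset --soft  HEAD~1   # move the branch/HEAD back one commit; index and working tree untouched
        git reset --mixed HEAD~1   # also reset the index (this is the default); working tree untouched
        git reset --hard  HEAD~1   # reset index and working tree too; local changes are discarded

    HEAD^ and HEAD~1 are two spellings of the same thing here: the first parent of the current commit.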

    Read the article

  • Building a specific piece of Android platform?

    - by Chrisc
    Hi, I have been trying to build only the "/libcore" directory of the Android platform. When I try mmm libcore I end up with the following output:

        ============================================
        PLATFORM_VERSION_CODENAME=REL
        PLATFORM_VERSION=2.1-update1
        TARGET_PRODUCT=generic
        TARGET_BUILD_VARIANT=eng
        TARGET_SIMULATOR=false
        TARGET_BUILD_TYPE=release
        TARGET_ARCH=arm
        HOST_ARCH=x86
        HOST_OS=linux
        HOST_BUILD_TYPE=release
        BUILD_ID=ECLAIR
        ============================================
        make: Entering directory `/home/chris/android/platform'
        target Prebuilt: (out/target/product/generic/system/etc/security/cacerts.bks)
        host Prebuilt: run-core-tests-on-ri (out/host/linux-x86/obj/EXECUTABLES/run-core-tests-on-ri_intermediates/run-core-tests-on-ri)
        target Prebuilt: run-core-tests (out/target/product/generic/obj/EXECUTABLES/run-core-tests_intermediates/run-core-tests)
        Copy: out/target/product/generic/system/etc/apns-conf.xml
        Copying: out/target/common/obj/JAVA_LIBRARIES/core_intermediates/classes-full-debug.jar
        Copying: out/target/common/obj/JAVA_LIBRARIES/core-tests_intermediates/classes-full-debug.jar
        /bin/bash: jar: command not found
        make: *** [out/host/common/core-tests.jar] Error 127
        make: *** Deleting file `out/host/common/core-tests.jar'
        make: Leaving directory `/home/chris/android/platform'

    Does anyone have any suggestions on what Error 127 is, or another way I can build "libcore" without having to build the entire platform again? Thanks, Chris
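
    Error 127 is the shell's "command not found" exit code, and the line just above names the culprit: jar is not on the PATH of the build shell, which usually means only a JRE (not a full JDK) is visible. A sketch of the usual fix, with a hypothetical JDK location to adjust to the actual install:

        export JAVA_HOME=/usr/lib/jvm/java-6-sun   # hypothetical JDK path
        export PATH=$JAVA_HOME/bin:$PATH
        which jar                                  # should now print a path
        . build/envsetup.sh && mmm libcore         # then retry the partial build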

    Read the article

  • Latex: Extracting the sty files of all the used packages

    - by Zlatko
    Hi. So after writing a large .tex file and using many packages, I want to archive everything: not just the .tex and .jpg files but also the .sty files. This is because sometimes options in the .sty files change, and then I can't compile the file any more. The "problem" is that, since I'm using Ubuntu, I have already installed all the packages system-wide. I don't want to have to copy them manually. Is there a program that can do this automatically? Thanks.
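
    There is tooling aimed at exactly this: the snapshot package records every file the document loads into a .dep file, and the bundledoc script (also on CTAN) then archives those files, .sty ones included, alongside the sources. A sketch of the workflow, assuming the main file is called main.tex:

        % In main.tex, before \documentclass:
        \RequirePackage{snapshot}   % writes main.dep listing every package/file used
        \documentclass{article}
        ...

    After a normal compile, running bundledoc main.dep from the shell collects the listed .sty and class files into a single archive you can keep with the project.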

    Read the article

  • Jquery getJSON Not Working Cross Site

    - by CJ
    I have a piece of javascript that grabs JSON data. When executed locally everything seems to work fine. However, when I try accessing it from a different site, it doesn't work. Here's the script. $(function(){ var aT = new AjaxTest(); aT.getJson(); }); var AjaxTest = function() { this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php"; this.getJson = function(){ $.getJSON(this.ajaxUrl, function(data){ $.each(data, function(i, piece){ alert(piece); }); }); } } You can find a copy of the exact same file at "http://mydeveloperpage.com/sandbox/ajax_json_test/". Any help would be greatly appreciated. Thanks!

    Read the article
