Search Results

Search found 13068 results on 523 pages for 'copy and paste'.

Page 358/523 | < Previous Page | 354 355 356 357 358 359 360 361 362 363 364 365  | Next Page >

  • django media url is not resolved in 500 internal server error template

    - by Tom Tom
    Hi, I'm using a 500.html template for my app, which is an identical copy of the 404.html with some minor text changes. Interestingly, the {{ media_url }} context variable is not resolved when the 500.html is rendered (e.g. when I force an internal server error), resulting in a page without any CSS loaded. An easy way to circumvent this would be to hardcode the links to the CSS, but I'm just curious why the media_url is not resolved. Is it because the server encounters an internal server error, so the context variables are no longer available?
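
    One likely explanation: Django's default 500 handler renders 500.html with a completely empty context, so context processors (which normally supply the media URL) never run. A rough sketch of a custom handler500, assuming Django 1.x and a hypothetical myapp package, that passes the value explicitly:

      # urls.py -- point Django at a custom 500 view (sketch)
      handler500 = 'myapp.views.server_error'

      # myapp/views.py
      from django.conf import settings
      from django.http import HttpResponseServerError
      from django.template import loader, Context

      def server_error(request, template_name='500.html'):
          # build the context by hand, since context processors are skipped for 500s
          t = loader.get_template(template_name)
          return HttpResponseServerError(t.render(Context({'media_url': settings.MEDIA_URL})))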

    Read the article

  • Visual Studio : Make files in a folder go to bin/debug and not bin/debug/folder

    - by CF_Maintainer
    Consider this: I have a folder called \SQLCE35Dlls inside my solution. It contains some DLLs that are required for the application to interact with a SQLCE database in a stand-alone fashion [without SQL Server CE 3.5 installed on the PC]. After a build, I want these files to go to bin/debug and not to bin/debug/SQLCE35Dlls/. Setting "Copy if Newer" creates the latter situation; I want the former. Is it possible to do this, or does it have to be done as part of the installer script? [I want to avoid adding the DLLs at the root level of the solution instead of inside a folder.] This is a WinForms project solution.
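
    One common workaround, sketched here rather than guaranteed: leave "Copy to Output Directory" unset and copy the folder's contents flat in a post-build event (Project Properties > Build Events), using the standard $(ProjectDir) and $(TargetDir) macros:

      xcopy "$(ProjectDir)SQLCE35Dlls\*.dll" "$(TargetDir)" /Y /I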

    Read the article

  • Sql Server - INSERT INTO SELECT to avoid duplicates

    - by Ashish Gupta
    I have the following two tables:

      Table1
      ID  Name
      1   A
      2   B
      3   C

      Table2
      ID  Name
      1   Z

    I need to insert data from Table1 into Table2, and I can use the following syntax for that:

      INSERT INTO Table2 (Id, Name) SELECT Id, Name FROM Table1

    However, in my case duplicate Ids might exist in Table2 (here it's just "1"), and I don't want to copy those again because that would throw an error. I can write something like this:

      IF NOT EXISTS (SELECT 1 FROM Table2 WHERE Id = 1)
          INSERT INTO Table2 (Id, Name) SELECT Id, Name FROM Table1
      ELSE
          INSERT INTO Table2 (Id, Name) SELECT Id, Name FROM Table1 WHERE Table1.Id <> 1

    Is there a better way to do this without using IF - ELSE? I want to avoid two INSERT INTO ... SELECT statements based on a condition. Any help is appreciated.
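
    A single-statement sketch that skips any Id already present (not limited to Id 1), assuming Id is the column being matched on:

      INSERT INTO Table2 (Id, Name)
      SELECT t1.Id, t1.Name
      FROM Table1 t1
      WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.Id = t1.Id);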

    Read the article

  • executing two functions with wshshell

    - by sushant
    I have two different operations (copy and zip) to be executed. Can I do it with a single WshShell script? I tried:

      Dim WshShell, oExec, g, h
      h = "D:\d"
      g = "xcopy " & h & " " & "D:\y\ /E & cmd /c cd D:\c & D: & winzip32.exe -min -a D:\a"
      Set WshShell = CreateObject("WScript.Shell")
      Set oExec = WshShell.Exec(g)
      Do While oExec.Status = 0
          WScript.Sleep 100
      Loop
      WScript.Echo oExec.Status

    It didn't work, though the separate commands, i.e. g = "xcopy " & h & " " & "D:\y\ /E" and g = "cmd /c cd D:\d & D: & winzip32.exe -min -a D:\a", work on their own. I am sorry for the formatting problem. Any help is appreciated.
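
    A possible sketch (not tested here): run each command through cmd /c with WshShell.Run and its wait-on-return flag, so the zip step only starts after the copy has finished:

      Dim WshShell
      Set WshShell = CreateObject("WScript.Shell")
      ' 0 = hidden window, True = wait for the command to finish before continuing
      WshShell.Run "cmd /c xcopy D:\d D:\y\ /E", 0, True
      WshShell.Run "cmd /c cd /d D:\d & winzip32.exe -min -a D:\a", 0, True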

    Read the article

  • create table from another table in different database in sql server 2005

    - by Greg
    Hi, I have a database "temp" with a table "A". I created a new database "temp2". I want to copy table "A" from "temp" to a new table in "temp2". I tried this statement, but it says I have incorrect syntax:

      CREATE TABLE B IN 'temp2' AS (SELECT * FROM A IN 'temp');

    Here is the error:

      Msg 156, Level 15, State 1, Line 2  Incorrect syntax near the keyword 'IN'.
      Msg 156, Level 15, State 1, Line 3  Incorrect syntax near the keyword 'IN'.

    Does anyone know what the problem is? Thanks in advance, Greg
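
    The IN clause isn't T-SQL; a sketch of what usually works in SQL Server 2005 is SELECT ... INTO with three-part names (assuming both tables live in the dbo schema):

      SELECT *
      INTO temp2.dbo.B
      FROM temp.dbo.A;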

    Read the article

  • sql exception when transferring project from usb to c:\

    - by jello
    I'm working on a C# Windows program with Visual Studio 2008. Usually I work from school, directly on my USB drive, but when I copy the folder to my hard drive at home, an SQL exception is unhandled whenever I try to write to the database. It is unhandled at the conn.Open(); line. Here's the unhandled exception:

      Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name.
      Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'.

    It's weird, because my connection string says |DataDirectory|, so it should work on any drive... Here's my connection string:

      string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                       "Initial Catalog=PatientMonitoringDatabase; " +
                       "Integrated Security=True";

    What's going on here?
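
    A hedged guess at the cause: the SQL Express instance still has the copy from the USB path attached, so attaching a second file under the same name fails. One sketch of a cleanup, run against the .\SQLEXPRESS instance (check the exact database name in sys.databases first; the name below is an assumption):

      SELECT name FROM sys.databases;                    -- find the stale entry
      EXEC sp_detach_db 'PatientMonitoringDatabase';     -- then detach it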

    Read the article

  • asp.net search application index update help

    - by srinivasan
    Hi, I'm developing a simple search application (ASP.NET, VB.NET). The index is actually a hash table that will be stored in my file system. The search page will open this in read mode, copy it to a hash table object, and perform the search. Other update and delete functions will open it in write mode and update it. What do I have to do so that there are no exceptions when multiple users access these things at the same time? What do I have to do to make this robust and error free? I want multiple users to access the search page without any problem, and the index updates to happen in parallel as well. Thanks, srinivasan
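
    One possible sketch (a single-process guard only, assuming .NET 3.5 and one web application touching the index file): serialise access with a shared ReaderWriterLockSlim so many searches can read while updates get exclusive access:

      Private Shared ReadOnly IndexLock As New System.Threading.ReaderWriterLockSlim()

      Public Function SearchIndex(ByVal term As String) As Object
          IndexLock.EnterReadLock()
          Try
              ' load the hash table from disk and search it here
              Return Nothing
          Finally
              IndexLock.ExitReadLock()
          End Try
      End Function

      Public Sub UpdateIndex()
          IndexLock.EnterWriteLock()
          Try
              ' rewrite the index file here
          Finally
              IndexLock.ExitWriteLock()
          End Try
      End Sub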

    Read the article

  • JSON URL from StackExchange API returning jibberish?

    - by shsteimer
    I have a feeling I'm doing something wrong here, but I'm not quite sure if I'm missing a step or just having an encoding problem or something. Here's my code:

      URL url = new URL("http://api.stackoverflow.com/0.8/questions/2886661");
      BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
      // Question q = new Gson().fromJson(in, Question.class);
      String line;
      StringBuffer content = new StringBuffer();
      while ((line = in.readLine()) != null) {
          content.append(line);
      }

    When I print content, I get a whole bunch of wingdings and special characters, basically gibberish. I would copy and paste it here, but that isn't working. What am I doing wrong?
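
    The likely cause (hedged, but it matches the symptom): the Stack Overflow API gzip-compresses every response, so the raw stream has to be unwrapped before reading, roughly like this:

      // GZIPInputStream is java.util.zip.GZIPInputStream
      URL url = new URL("http://api.stackoverflow.com/0.8/questions/2886661");
      BufferedReader in = new BufferedReader(
              new InputStreamReader(new GZIPInputStream(url.openStream()), "UTF-8"));
      // the rest of the readLine() loop stays the same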

    Read the article

  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read only, by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this? The problem Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read only state. Windows' Disk management: Diskpart: Generic Flash Disk USB Device Disk ID: 33FA33FA Type : USB Status : Online Path : 0 Target : 0 LUN ID : 0 Location Path : UNAVAILABLE Current Read-only State : Yes Read-only : No Boot Disk : No Pagefile Disk : No Hibernation File Disk : No Crashdump Disk : No Clustered Disk : No What really confuses me is Current Read-only State : Yes and Read-only : No. Attempted solutions So far, I've tried: Formatting it in Windows (in Disk management, the format options are greyed out when right clicking). DiskPart Clean (CLEAN - Clear the configuration information, or all information, off the disk.): DISKPART> clean DiskPart has encountered an error: The media is write protected. See the System Event Log for more information. There was nothing in the event log. Windows command line format >format G: Insert new disk for drive G: and press ENTER when ready... The type of the file system is FAT32. Verifying 7740M Cannot format. This volume is write protected. Windows chkdsk: see below for details Kubuntu fsck (through VirtualBox USB passthrough): see below for details Acronis True Image to format, to convert to GPT, to destroy and rebuild MBR, basically anything: failed (could not write to MBR) Details (and a nice story) Background This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows. I never discovered why. The end usable space was about what I normally expect from a 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot comman which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption. Leading up to the current issue I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading it. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete files and remove from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck. 
I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran a fsck. There are differences between boot sector and its backup. I chose No action. It told me FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of Free cluster summary wrong. If I chose Correct, it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later. Cause? My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful, however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read only. On reconnect, it would come up as normal but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion. Attempts to fix This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However the inability to do so even on a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses first FAT automatically after telling me FATs differ. It still does the same Free cluster summary wrong afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots? I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive. Update 1: I pulled apart the drive out of curiosity. As you can see, there are no obvious write protect switches. There is an IC on the other side, ALCOR branded labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died. Update 2: I've pulled the card off, Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.

    Read the article

  • Set an OGG in the raw folder as Ringtone/Notification?

    - by YaW
    Hi, I have some OGG audio files in my raw folder and I'm trying to set one of them as a Ringtone (or Notification, Alarm... whatever). I've been looking at the source code of RingDroid and I can see how this is done using ContentValues and MediaStore, but in all the examples I've seen, the audio files are on the SD card. Is it possible to set the ringtone directly from the raw folder? If not, how can I copy the raw file to a folder on the SD card? Thanks in advance.
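
    A rough sketch of the copy step (the resource name and target path are made up, and error handling is omitted): stream the raw resource out to external storage, then hand that file to the MediaStore code from RingDroid:

      // R.raw.my_tone and the target path are hypothetical
      InputStream in = getResources().openRawResource(R.raw.my_tone);
      File out = new File(Environment.getExternalStorageDirectory(), "media/ringtones/my_tone.ogg");
      out.getParentFile().mkdirs();
      OutputStream os = new FileOutputStream(out);
      byte[] buf = new byte[4096];
      int len;
      while ((len = in.read(buf)) > 0) {
          os.write(buf, 0, len);
      }
      os.close();
      in.close();
      // 'out' can now be inserted into the MediaStore with IS_RINGTONE set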

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show on each tables size (and other incredibly useful information such as index stuff) and the biggest table is 1.1 million records which takes up 150MB of data. We have less than 50 tables most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SqlServer (2005) reports is 0%. The log mode is set to simple. At this point my main concern is I feel like I have 19GB of unaccounted for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even if it were down to 5GB!) than 20GB of data each week. sp_spaceused reports the following: Navigator-Production 19184.56 MB 3.02 MB And the second part of it: 19640872 KB 19512112 KB 108184 KB 20576 KB while I've found a few other scripts (such as the one from two of the server database size questions here, they all report the same information either found above or below). The script I am using is from SqlTeam. Here is the header info: * BigTables.sql * Bill Graziano (SQLTeam.com) * graz@<email removed> * v1.11 The top few tables show this (table, rows, reserved space, data, index, unused, etc): Activity 1143639 131 MB 89 MB 41768 KB 1648 KB 46% 1% EventAttendance 883261 90 MB 58 MB 32264 KB 328 KB 54% 0% Person 113437 31 MB 15 MB 15752 KB 912 KB 103% 3% HouseholdMember 113443 12 MB 6 MB 5224 KB 432 KB 82% 4% PostalAddress 48870 8 MB 6 MB 2200 KB 280 KB 36% 3% The rest of the tables are either the same in size or smaller. No more than 50 tables. Update 1: - All tables use unique identifiers. Usually an int incremented by 1 per row. I've also re-indexed everything. I ran the dbcc shrink command as well as updating the usage before and after. And over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs are running, this is a very new application -- under a week old) and when I went to run the shrink, every now and then it would say something about data changed. Googling yielded too few useful answers with the obvious not applying (it was 1am and I disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, are probably under 50k in rows. Even if those rows were the biggest rows, that wouldn't be more than 100M I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go. Update 2: sp_spaceused 'Activity' yields this (which seems right on the money): Activity 1143639 134488 KB 91072 KB 41768 KB 1648 KB Fill factor was 90. All primary keys are ints. Here is the command I used to 'updateusage': DBCC UPDATEUSAGE(0); Update 3: Per Edosoft's request: Image 111975 2407773 19262184 It appears as though the image table believes it's the 19GB portion. I don't understand what this means though. Is it really 19GB or is it misrepresented? 
Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated. The only index on the image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size. Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to roughly 2-5KB each, and on a normal file system they don't consume much space, but on SqlServer they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
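
    A sketch of a query that often helps in this situation (SQL Server 2005 and later): sum LOB pages per table from the allocation metadata, to confirm whether the Image table's blob pages account for the missing space:

      SELECT o.name AS table_name,
             SUM(au.total_pages) * 8 / 1024 AS lob_mb
      FROM sys.partitions p
      JOIN sys.allocation_units au ON au.container_id = p.partition_id
      JOIN sys.objects o ON o.object_id = p.object_id
      WHERE au.type_desc = 'LOB_DATA'
      GROUP BY o.name
      ORDER BY lob_mb DESC;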

    Read the article

  • copying a struct with a struct member to another struct

    - by user1839295
    Is the following code correct?

      typedef struct {
          int x;
          int y;
      } OTHERSTRUCT;

      struct DATATYPE {
          char a;
          OTHERSTRUCT b;
      }

      // ...
      // now we reserve two structs
      struct DATATYPE structA;
      struct DATATYPE structB;
      // ... probably fill in some values

      // now we copy structA to structB
      structA = structB;

    Are both structs now completely identical? Even the "struct in the struct"? Thanks!
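
    For what it's worth, a small self-contained sketch of the same situation (with the definition's trailing semicolon added) showing that plain assignment copies every member, including the nested struct, and that the copies are independent:

      #include <cassert>

      struct OTHERSTRUCT { int x; int y; };
      struct DATATYPE   { char a; OTHERSTRUCT b; };   // note the required ';'

      int main() {
          DATATYPE structB{'q', {1, 2}};
          DATATYPE structA = structB;   // member-wise copy, nested struct included
          structA.b.x = 99;
          assert(structB.b.x == 1);     // structB is untouched; the two are independent copies
      }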

    Read the article

  • DataSource for Tomcat web app, Spring and Hibernate

    - by EugeneP
    Web app runs on Tomcat. Datasource is configured with Spring configuration, and is used by Hibernate. If we cannot use JNDI, what would you suggest to use as a DataSource? org.springframework.jdbc.datasource.DriverManagerDataSource will be ok? It's not very good, but sincerely speaking, it can be used on production server, right? Just a bit of headache with too frequent connection reopening. Also, we can use BasicDataSource from Apache. It's much better of course, but here's the question. IF WE DON'T USE JNDI, THEN: If every instance of an app will create its own copy of a DataSource, and every DataSource can have 5 open connections, what do we get? Num_of_running_apps * Num_of_max_active_connections = max active open connection on a DB for this user? Second question: from the perspective of Hibernate, is there any difference about what datasource implementation is used? Will it work with no matter what datasource perfectly and in a stable way?
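
    For reference, a hedged sketch of a pooled DataSource bean (Commons DBCP; the driver, URL, credentials and pool size below are placeholders) that Hibernate can use exactly like DriverManagerDataSource:

      <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
          <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
          <property name="url" value="jdbc:mysql://localhost/mydb"/>
          <property name="username" value="user"/>
          <property name="password" value="secret"/>
          <property name="maxActive" value="5"/>
      </bean>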

    Read the article

  • Remove files from Bazaar

    - by Kristopher Ives
    I'm using Bazaar (bzr) to keep source code for a website updated, but we've run into a problem when we remove files from version control. The files we are removing are ones we never intended to version to begin with. When this happens we use bzr rm --keep to remove the file from version control but keep it in the file system. Doing a bzr push or bzr pull results in the removed file(s) being removed on the other branches (other sites that use our code). We need a way to make sure that a bzr push or bzr pull doesn't actually remove those files from the working copy. Anyone have any ideas?

    Read the article

  • Using an edit template without using Html.EditorFor()

    - by Mark Nijhof
    I have a date/time picker combination in an edit template that can be used like Html.EditorFor(x => x.ETA), but now I want to use the same template somewhere I don't have a model that contains a DateTime property. So I tried Html.Editor("DateWithTime", "Arrival"), which uses the correct template but doesn't assign a value to ViewData.ModelMetadata.PropertyName, which is something my template relies on. It sets the id of the textbox, which is obviously important. Is there a way to render the template and assign an id value to ViewData.ModelMetadata.PropertyName so I can re-use the logic in the template instead of having to copy it?

    Read the article

  • Idiom vs. pattern

    - by Roger Pate
    In the context of programming, how do idioms differ from patterns? I use the terms interchangeably and normally follow the most popular way I've heard something called, or the way it was called most recently in the current conversation, e.g. "the copy-swap idiom" and "singleton pattern". The best distinction I can come up with is that code meant to be copied almost literally is more often called a pattern, while code meant to be taken less literally is more often called an idiom, but even that isn't always true. This doesn't seem to be more than a stylistic or buzzword difference. Does that match your perception of how the terms are used? Is there a semantic difference?

    Read the article

  • Jquery getJSON Not Working Cross Site

    - by CJ
    I have a piece of JavaScript that grabs JSON data. When executed locally, everything seems to work fine. However, when I try accessing it from a different site, it doesn't work. Here's the script:

      $(function(){
          var aT = new AjaxTest();
          aT.getJson();
      });

      var AjaxTest = function() {
          this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php";
          this.getJson = function(){
              $.getJSON(this.ajaxUrl, function(data){
                  $.each(data, function(i, piece){
                      alert(piece);
                  });
              });
          }
      }

    You can find a copy of the exact same file at "http://mydeveloperpage.com/sandbox/ajax_json_test/". Any help would be greatly appreciated. Thanks!
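
    The usual suspect is the browser's same-origin policy: $.getJSON cannot read a plain JSON response from another domain. A hedged sketch of the JSONP route (it only works if the PHP endpoint is also changed to wrap its output in the callback named by the callback parameter):

      this.getJson = function(){
          // adding "callback=?" makes jQuery switch to JSONP for the cross-domain request
          $.getJSON(this.ajaxUrl + "?callback=?", function(data){
              $.each(data, function(i, piece){
                  alert(piece);
              });
          });
      }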

    Read the article

  • Latex: Extracting the sty files of all the used packages

    - by Zlatko
    Hi. After writing a large .tex file and using many packages, I want to archive everything: not just the .tex and .jpg files but also the .sty files. This is because sometimes options in the .sty files change, and then I can't compile the file. The "problem" is that, since I'm using Ubuntu, all the packages are already installed system-wide. I don't want to have to copy them manually. Is there a program that can do this automatically? Thanks.
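
    One approach worth checking (a sketch; the file name is made up and both tools may need to be installed separately on Ubuntu): the snapshot package records every file the document loads, and bundledoc then copies those dependencies into an archive:

      % at the very top of the document, before \documentclass
      \RequirePackage{snapshot}   % writes a .dep file listing every package/file used

      $ pdflatex mydoc.tex        % produces mydoc.dep
      $ bundledoc mydoc.dep       % bundles the .tex, .sty and other dependencies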

    Read the article

  • Casting a non-generic type to a generic one

    - by John Sheehan
    I've got this class:

      class Foo {
          public string Name { get; set; }
      }

    And this class:

      class Foo<T> : Foo {
          public T Data { get; set; }
      }

    Here's what I want to do:

      public Foo<T> GetSome() {
          Foo foo = GetFoo();
          Foo<T> foot = (Foo<T>)foo;
          foot.Data = GetData<T>();
          return foot;
      }

    What's the easiest way to convert Foo to Foo<T>? I can't cast directly (InvalidCastException), and I don't want to copy each property manually (in my actual use case, there's more than one property) if I don't have to. Is a user-defined type conversion the way to go?
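
    The downcast can't work because the object GetFoo() returns was never a Foo<T>. One sketch of an alternative (assuming GetFoo and GetData<T> as in the question): give Foo<T> a constructor that copies from a Foo, so the copying logic lives in one place:

      class Foo<T> : Foo {
          public Foo() { }
          public Foo(Foo other) { Name = other.Name; }   // copy the base properties once, here
          public T Data { get; set; }
      }

      public Foo<T> GetSome<T>() {
          var foot = new Foo<T>(GetFoo());
          foot.Data = GetData<T>();
          return foot;
      }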

    Read the article

  • How can I get this code involving unique_ptr to compile?!

    - by Neil G
    #include <vector>
      #include <memory>
      using namespace std;

      class A {
      public:
          A(): i(new int) {}
          A(A const& a) = delete;
          A(A &&a): i(move(a.i)) {}
          unique_ptr<int> i;
      };

      class AGroup {
      public:
          void AddA(A &&a) { a_.emplace_back(move(a)); }
          vector<A> a_;
      };

      int main() {
          AGroup ag;
          ag.AddA(A());
          return 0;
      }

    does not compile... (says that unique_ptr's copy constructor is deleted) I tried replacing move with forward. Not sure if I did it right, but it didn't work for me.

    Read the article

  • bjam with visual studio 2010

    - by ra170
    OK, so I ran into problems with Boost under Visual Studio 2010, so I decided to rebuild it with bjam, like this:

      bjam --toolset=msvc-10.0 --build-type=complete

    After running bjam (successfully?) it created a new directory under boost_1_42_0 called bin.v2. Inside bin.v2 is a directory called libs. Two issues: 1. There are a lot fewer libs under that new directory (about 13); the old libs directory has 88. Is it supposed to be like that, or did something fail? 2. The structure is somewhat different too. What do I do with this exactly? Meaning, do I copy it over to the original libs, delete the old libs, or try rebuilding it with different flags?
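
    For what it's worth, bin.v2 is only Boost.Build's intermediate tree. A commonly used sketch of the invocation adds the stage target, which collects the finished libraries in one place so the linker can be pointed there (or they can be copied wherever needed):

      bjam --toolset=msvc-10.0 --build-type=complete stage
      rem finished .lib files should end up under boost_1_42_0\stage\lib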

    Read the article

  • Tools to backup an external hard disk

    - by Kaushik Gopal
    Hey people, What's the best method to take an exact copy of my external hard disk? A guru suggested rsync, but I was wondering if there's an easier alternative. I do remember reading somewhere that Acronis also does this. I was looking for your advice on the best option. I'm running Windows. Essentially I have an external HDD which has a lot of stuff synchronized across various PCs. I wish to take a backup of this external hard disk (external HDDs aren't entirely reliable, so I want to keep a backup of mine). Cheers. K
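
    On Windows, one low-friction sketch (robocopy is built into Vista/7; on XP it ships with the Resource Kit; the drive letters below are placeholders) is a mirrored copy:

      robocopy E:\ F:\ExternalBackup /MIR /R:2 /W:5
      rem /MIR mirrors the source, deleting files in the backup that no longer exist on E:\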

    Read the article

  • How to package .Net framework in Visual Studio project?

    - by raj.tiwari
    I have created a C#/.NET application using Visual Studio. I have also created an installer project that puts out two files: an MSI file and a Setup.exe file. In my installer project properties I have set up .NET 3.5 as a prerequisite. What I would like my installer to do is as follows: put out a single file (MSI/exe/whatever) that also includes the .NET Framework prerequisite. The installer should check whether the .NET Framework is installed on the target machine and, if not, install it from its own bundled copy. Right now my installer sends people to the web to get .NET. This is not the user experience I want. Thanks for your help. -Raj

    Read the article

  • Sharing output streams through a JNI interface

    - by Chris Conway
    I am writing a Java application that uses a C++ library through a JNI interface. The C++ library creates objects of type Foo, which are duly passed up through JNI to Java. Suppose the library has an output function void Foo::print(std::ostream &os) and I have a Java OutputStream out. How can I invoke Foo::print from Java so that the output appears on out? Is there any way to coerce the OutputStream to a std::ostream in the JNI layer? Can I capture the output in a buffer in the JNI layer and then copy it into out?
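
    The buffer-and-copy route is probably the simplest. A rough sketch (the wrapper class, method name and the way the Foo pointer is passed are all assumptions): capture the output in a std::ostringstream on the C++ side, return it as a byte array, and write that array to the OutputStream in Java:

      // C++ side of a hypothetical wrapper class FooWrapper
      #include <jni.h>
      #include <sstream>
      #include <string>
      #include "foo.h"   // wherever the library declares Foo

      extern "C" JNIEXPORT jbyteArray JNICALL
      Java_FooWrapper_printToBytes(JNIEnv* env, jobject, jlong fooHandle) {
          std::ostringstream oss;
          reinterpret_cast<Foo*>(fooHandle)->print(oss);   // capture Foo::print's output
          const std::string s = oss.str();
          jbyteArray bytes = env->NewByteArray(static_cast<jsize>(s.size()));
          env->SetByteArrayRegion(bytes, 0, static_cast<jsize>(s.size()),
                                  reinterpret_cast<const jbyte*>(s.data()));
          return bytes;   // on the Java side: out.write(wrapper.printToBytes(handle));
      }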

    Read the article

  • Linking to an Apache License 2.0 library and distributing with proprietary application

    - by atnakjp
    Hi all, I've read through "Apache License, Version 2.0", but my interpretation was slightly different from an answer given in a related question, so I was hoping for some clarification. Supposing I created an application that linked to a library licensed under the license in question, my interpretation of what's required is: I don't need to do anything special to the application itself because it's considered neither "Work" nor "Derivative Works". When distributing the library alongside the application, I need to include a copy of the license. Any installer that contains the library would be considered "Derivative Works", and therefore I would need to show the attribution notices contained in "NOTICE" (if one exists) in one of its screens. If I were to distribute everything in a zip file instead, I would need to put the same attribution notices in a text file that I distribute alongside the file. Does this sound about right? Cheers,

    Read the article

< Previous Page | 354 355 356 357 358 359 360 361 362 363 364 365  | Next Page >