Search Results

Search found 59975 results on 2399 pages for 'data comparison'.

Page 924/2399

  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things, but I am wondering if there is a better option. The following list is ordered by decreasing quality:

    1. inkscape --export-area-page --export-eps=out.eps in.pdf (the graphical program Inkscape) works best, but is a bit slow;
    2. pdftops -eps in.pdf out.eps uses Poppler, works well and is fast;
    3. pdf2ps in.pdf out.eps uses Ghostscript and works OK for simple documents;
    4. convert in.pdf out.eps uses ImageMagick and always rasterizes the image.

    I haven't tested the following: acroread -toPostScript uses acroread (Linux only).

    Some issues I've found: Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs; Inkscape does this best by rasterizing only the unsupported area. Gradients are rendered properly by Inkscape, but Poppler somehow chops the gradient up into many shapes of different colors. Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).

    What are your experiences with this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of the possibilities; please correct me if I'm wrong. Related post: How to convert PDF to EPS? on TeX

    Read the article

  • Disable inserted lines in multiline TextBox

    - by Shohin
    I have a multiline TextBox on my web page. When a user logs in, enters text, and presses the "Save" button, the data is saved. The next time the same user logs in and pulls up that data, I want him to be able to add only new text in the multiline TextBox, without removing or replacing the previously entered text. Is there any way to make a multiline TextBox lock the previously inserted lines or text and allow only appending? Thanks in advance.
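    One way to enforce this is on the server, since any client-side locking can be bypassed by editing the POST. A minimal C# sketch of the idea (treating the saved text as an immutable prefix; how you load and store the text is left to your own data access):

    using System;

    // A minimal sketch: the previously saved text is treated as a locked
    // prefix, and a submission is accepted only if that prefix is intact.
    public static class AppendOnlyText
    {
        // Returns the text to persist, or null if the locked prefix was modified.
        public static string Validate(string saved, string posted)
        {
            saved = saved ?? "";   // first save: nothing is locked yet
            if (!posted.StartsWith(saved, StringComparison.Ordinal))
                return null;       // old lines were edited or removed
            return posted;         // old prefix plus whatever was appended
        }
    }

    In the Save handler you would call Validate with the previously stored text and the posted text, and persist only a non-null result; rendering the old text read-only next to an empty TextBox for new lines achieves the same effect with less bookkeeping.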

    Read the article

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around), I am now evaluating my strategy for protecting confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (Lenovo T61 with 8 GB of RAM, 2 GHz dual-core CPU). It runs Windows 7 fine, but it is no speed demon, and it doesn't support the AES instruction set. I've been using a TrueCrypt volume mounted on demand for really important stuff like financial statements forever; nothing else is encrypted. I just finished evaluating EFS and BitLocker and took a closer look at TrueCrypt again. I've come to the conclusion that boot-partition encryption via BitLocker or TrueCrypt is not worth the hassle. I may decide in the future to use BitLocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed. The purpose of this post is to get your feedback about which folders should be encrypted from a general point of view (of course everyone will have something specific in addition). Here is what I have thought of so far (I will update if I think of something else):

    1) AppData\Local\Microsoft\Outlook: Outlook files.
    2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles: Thunderbird profiles; I'm not sure yet where exactly the data is stored.
    3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups: Firefox bookmark backups. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet.
    4) Bookmarks for Chrome (I don't know where its bookmarks are) and Internet Explorer ($Username\Favorites): I don't really use them, but why not secure those as well.
    5) The Downloads\, My Documents\ and My Pictures\ folders I don't think I need to encrypt wholesale (say, the latest service pack for Visual Studio), so I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted". Anything sensitive I will save in that folder.

    Any other suggestions? Again, this is from the point of view of your "regular office user".
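    Purely as a sketch of automating that last step in C# (assuming NTFS; the Documents\Secure path is an example), System.IO.File.Encrypt applies the same EFS encryption as the Explorer checkbox:

    using System;
    using System.IO;

    // A sketch of marking everything under a "Secure" folder as EFS-encrypted.
    // File.Encrypt uses the same NTFS/EFS mechanism as Explorer's "Encrypt
    // contents to secure data" checkbox.
    class EncryptSecureFolders
    {
        static void Main()
        {
            string root = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
                "Secure");

            foreach (string file in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
            {
                File.Encrypt(file); // encrypts with the current user's EFS key
                Console.WriteLine("Encrypted: " + file);
            }
        }
    }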

    Read the article

  • Using REST web services as a data source for Lift?

    - by Jeff Bowman
    Is there a way to use a web service (REST in this case) as the data source for a Lift application? I can find a number of tutorials/examples of using Lift to provide a REST API, but in my case the data is hosted elsewhere and exported as a REST web service. Pointers to documentation are greatly appreciated. Thanks, Jeff

    Read the article

  • How to make a huge RAM drive?

    - by Brandon Moore
    At my old job, when a report was needed I could sit down with someone, pull up results, get immediate feedback, refine my queries, and ultimately have the data we needed, in the format we needed, within 30-90 minutes. I just started working for a new company with a database containing millions of records, and I spent my whole 8 hours making a report that I feel I could have made in less than 2 hours if it were not for the massive amount of data the queries are working with, and the fact that I couldn't ask the person needing the data to sit down with me and give me feedback as I pulled up results, as I am used to. So I am trying to think of how we can make the server faster... much faster, so that I can have the same level of productivity I'm used to. One thought that just came to mind is that memory is so cheap these days; by my calculations I could buy ten 8 GB RAM sticks for 1000 bucks. What I have never heard of, though, is a device that would let me combine these into a huge RAM drive. So I'd like to know if any such device exists, and if not, what is the largest RAM drive I could realistically make, and how would I go about doing so?

    EDIT: To you guys who are saying the database schema needs to be analyzed: you can't make a query such as "Select f1, f2, f3, etc from SomeTable" run any faster by normalizing or indexing the table. What I'm talking about IS ABSOLUTELY a need for improved performance at the hardware level. I am used to having results come back to me in a few seconds, not a few minutes, much less half an hour. Maybe that's what you guys are used to if you have 100-billion-record tables and feel like that's fast, but I'm looking for results from tables with about 10 million records to come back to me within half a minute TOPS.

    Read the article

  • Generating a second context menu dynamically in Winforms

    - by rsteckly
    Hi, I have a context menu with a few selections. If the user picks a particular choice, I want a list of choices in a second menu to come up. These choices would come from a data store and be unique to that user. I see how, in the designer, you can statically add a set of choices that show up when the user makes a selection. But what do you do when those choices need to come from data rather than being laid out in the designer?
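    A minimal WinForms sketch of one way to do this (GetChoicesForUser is a hypothetical stand-in for the data store): rebuild the second-level items in the menu's Opening event, so they always reflect the current user's data rather than anything fixed at design time.

    using System;
    using System.Windows.Forms;

    // A minimal sketch: the second-level items are rebuilt every time the
    // menu opens, so they always reflect the current user's data.
    public class DynamicMenuForm : Form
    {
        private readonly ContextMenuStrip menu = new ContextMenuStrip();
        private readonly ToolStripMenuItem parent = new ToolStripMenuItem("Choices");

        public DynamicMenuForm()
        {
            menu.Items.Add(parent);
            menu.Opening += (s, e) =>
            {
                parent.DropDownItems.Clear(); // discard stale items
                foreach (string choice in GetChoicesForUser("someUser"))
                {
                    var item = new ToolStripMenuItem(choice);
                    item.Click += (s2, e2) => MessageBox.Show(item.Text);
                    parent.DropDownItems.Add(item);
                }
            };
            ContextMenuStrip = menu; // shown on right-click of the form
        }

        // Hypothetical placeholder: replace with a query keyed by the user.
        private static string[] GetChoicesForUser(string user)
        {
            return new[] { "Choice A", "Choice B", "Choice C" };
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new DynamicMenuForm());
        }
    }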

    Read the article

  • Using jqPrint to print a div?

    - by Lohkaeo
    I'm using jqPrint to print data on my page. It is useful, but I found a problem when the data runs to two or more pages: jqPrint prints only the first page (I see the problem in Firefox and IE8, but not in Chrome). How do I solve this problem? Thanks for the help, Lohkaeo

    Read the article

  • jQuery function problem

    - by user295189
    I have this function:

    function onclickRowRecord(recordID) {
        $.ajax({
            type: "POST",
            url: '/file/to/post.to.php',
            data: { recordID: recordID },
            success: function(data) {
                // how do I pass this on to the function howToPost(recordID)?
            }
        });
    }

    function howToPost(recordID) {
        alert(recordID);
    }

    How can I get the response from the ajax success callback and pass it to the other function?

    Read the article

  • Password Cracking Windows Accounts

    - by Kevin
    At work we have laptops with encrypted hard drives. Most developers here (on occasion I have been guilty of it too) leave their laptops in hibernate mode when they take them home at night. Obviously, Windows (i.e., a program running in the background on its behalf) must have a method to decrypt the data on the drive, or it wouldn't be able to access it. That being said, I have always thought that leaving a Windows machine in hibernate mode in a non-secure place (not locked up at work) is a security threat, because someone could take the machine, leave it running, crack the Windows accounts and use them to decrypt the data and steal the information. When I got to thinking about how I would go about breaking into the Windows system without restarting it, I couldn't figure out if it was possible. I know it is possible to write a program to crack Windows passwords once you have access to the appropriate file(s). But is it possible to execute a program from a locked Windows system that would do this? I don't know of a way to do it, but I am not a Windows expert. If so, is there a way to prevent it? I don't want to expose security vulnerabilities, so I would ask that no one post the necessary steps in detail, but if someone could say something like "Yes, it's possible: the USB drive allows arbitrary execution," that would be great!

    EDIT: The idea behind the encryption is that you can't reboot the system, because once you do, the disk encryption requires a login before Windows can even start. With the machine in hibernate mode, the system owner has already bypassed the encryption for the attacker, leaving Windows as the only line of defense protecting the data.

    Read the article

  • Sorting a tree by another column in SQL Server 2008

    - by bodziec
    Hi, I have a table which implements a tree using a hierarchyid column. Sample data:

    People   \
    Girls    \1\
    Zoey     \1\1\
    Kate     \1\2\
    Monica   \1\3\
    Boys     \2\
    Mark     \2\1\
    David    \2\2\

    This is the order when using the hierarchyid column as the sort column. I would like to sort the data using the hierarchyid but also by name, so it would look like this:

    People   \
    Boys     \2\
    David    \2\2\
    Mark     \2\1\
    Girls    \1\
    Kate     \1\2\
    Monica   \1\3\
    Zoey     \1\1\

    Is there a simple solution to do this? Can it be done in a single SQL query?

    Read the article

  • Upgrade TFS 2008 to 2010 on a different server

    - by Chen
    Hi, I have been looking for a way to migrate and upgrade our TFS 2008 server to 2010, preferably without losing any data. I have been looking at the TFS Integration Platform http://tfsintegration.codeplex.com/ and also the Visual Studio 2010 TFS Upgrade Guide vs2010upgradeguide.codeplex.com. Looking at the document TFS Integration Platform - Migration Guidance.xps from the first link, it seems to suggest that I could preserve all the data by first migrating the TFS 2008 installation from one server to the other and then upgrading TFS 2008 to 2010. Is this true? Thank you, Chen

    Read the article

  • What speed are Wi-Fi management and control frames sent at?

    - by Bryce Thomas
    There are a bunch of different 802.11 Wi-Fi standards, e.g. 802.11a, 802.11b, 802.11g, 802.11n etc., that all support different speeds. Wi-Fi frames are generally categorised as one of the following:

    Data frames: carry the actual application data.
    Control frames: coordinate when it's safe to send, to reduce collisions.
    Management frames: handle connection discovery, setup and tear-down (e.g. AP discovery, association, disassociation).

    My question is whether all these frames, and specifically management frames, are transmitted at the fastest supported speed available, or whether certain classes of frames are transmitted at some lowest-common-denominator speed. I have noticed that when I put an 802.11b/g-only device into monitor mode and capture traffic over the air, I still see management frames (e.g. association/disassociation) being transmitted between my phone and AP, which are both 802.11n, even though 802.11n has a higher transfer rate. So I am imagining one of two possibilities:

    1. My 802.11n phone/AP had to negotiate a slower speed for some reason, and that's why I can see their frames on my 802.11b/g monitoring device.
    2. Management frames (and perhaps control frames also?) are sent at a lower speed, and it's only data frames that are transmitted faster with newer 802.11 standards.

    The reason I would like to know which of these two possibilities (or perhaps a third) is the case is that I want to capture management frames, and I need to know whether using an 802.11b/g card is going to lead to me missing some frames sent at higher speeds than the monitoring card can observe. If management frames are indeed sent at a slower rate, then it's all good. If I just happen to be seeing the management frames because my phone/AP have negotiated a slower rate, though, then I need to reconsider what card I use for packet capture.

    Read the article

  • iPhone app crashing on launch

    - by Declan Scott
    Hey, my simple iPhone app is crashing on launch. It says "the application downloadText quit unexpectedly" in one of those windows that pop up when a Mac app crashes, with a "Send to Apple" button. My implementation file (.m) is below, and I would greatly appreciate it if anyone could give me a hand as to what's wrong. Thanks, Declan

    #import "downloadTextViewController.h"

    @implementation downloadTextViewController

    // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
    - (void)viewDidLoad {
        NSString *myPath = [self saveFilePath];
        NSLog(@"%@", myPath); // note: the original passed myPath directly as the format string
        BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:myPath];
        if (fileExists) {
            NSArray *values = [[NSArray alloc] initWithContentsOfFile:myPath];
            textView.text = [values objectAtIndex:0];
            [values release];
        }
        // notification
        UIApplication *myApp = [UIApplication sharedApplication];
        // add yourself to the dispatch table
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(applicationWillTerminate:)
                                                     name:UIApplicationWillTerminateNotification
                                                   object:myApp];
        [super viewDidLoad];
    }

    - (IBAction)fetchData {
        // Show activityIndicator / progressView
        NSURLRequest *downloadRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://simpsonatyapps.com/exampletext.txt"]
                                                         cachePolicy:NSURLRequestReloadIgnoringCacheData
                                                     timeoutInterval:1.0];
        NSURLConnection *downloadConnection = [[NSURLConnection alloc] initWithRequest:downloadRequest delegate:self];
        if (downloadConnection)
            downloadedData = [[NSMutableData data] retain];
        else {
            // Error message
        }
    }

    - (void)connection:(NSURLConnection *)downloadConnection didReceiveData:(NSData *)data {
        [downloadedData appendData:data];
        NSString *file = [[NSString alloc] initWithData:downloadedData encoding:NSUTF8StringEncoding];
        textView.text = file;
        // Remove activityIndicator / progressView
        [[UIApplication sharedApplication] setApplicationIconBadgeNumber:1];
    }

    - (NSString *)saveFilePath {
        NSArray *pathArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        return [[pathArray objectAtIndex:0] stringByAppendingPathComponent:@"savedddata.plist"];
    }

    - (void)applicationWillTerminate:(UIApplication *)application {
        NSArray *values = [[NSArray alloc] initWithObjects:textView.text, nil];
        [values writeToFile:[self saveFilePath] atomically:YES];
        [values release];
    }

    - (void)didReceiveMemoryWarning {
        // Releases the view if it doesn't have a superview.
        [super didReceiveMemoryWarning];
        // Release any cached data, images, etc. that aren't in use.
    }

    - (void)viewDidUnload {
        // Release any retained subviews of the main view.
        // e.g. self.myOutlet = nil;
    }

    - (void)dealloc {
        [super dealloc];
    }

    - (NSCachedURLResponse *)connection:(NSURLConnection *)connection willCacheResponse:(NSCachedURLResponse *)cachedResponse {
        return nil;
    }

    @end

    Read the article

  • Compilation errors for a C API

    - by sam
    What would be the reason for the following errors, even though the syntax looks right and I have included the CoreServices framework, in which some of the data types and constants are declared?

    c.c:22: error: syntax error before ‘CFFileDescriptorRef’
    c.c:22: warning: no semicolon at end of struct or union
    c.c:24: error: syntax error before ‘}’ token
    c.c:24: warning: data definition has no type or storage class
    lipo: can't figure out the architecture type of: /var/folders/fF/fFgga6+-E48RL+iXKLFmAE+++TI/-Tmp-//ccFzQIAj.out

    Read the article

  • alternatedocroot

    - by ring bearer
    I am using Sun GlassFish Enterprise Server v2.1.1, with "alternatedocroot" entries in sun-web.xml for my web application, to keep static content separate from the actual deployable code (EAR/WAR). What I have is a cluster of two server instances distributed across two physical hosts, HOST1 and HOST2. The "alternatedocroot" points to /data/static-content/ on both HOST1 and HOST2. Would the DAS (Domain Administration Server) take care of syncing /data/static-content between HOST1 and HOST2 if I use the syncinstances=true option while starting up the cluster? Thanks!
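    For reference, an alternate docroot is declared as a web-app property in sun-web.xml; a minimal sketch along the lines of the setup described above (the from pattern is an example):

    <sun-web-app>
      <!-- Requests matching the "from" pattern are served from the "dir"
           path on the local filesystem instead of from inside the WAR. -->
      <property name="alternatedocroot_1"
                value="from=/static/* dir=/data/static-content"/>
    </sun-web-app>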

    Read the article

  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation.

    System specs:

    Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10,000 RPM)
    Motherboard: Gigabyte Z77X-UD3H
    BIOS SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s port)
    Windows 7 Enterprise SP1 64-bit

    The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports that the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?). Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating 2 partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data. Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.

    Read the article

  • Windows Bluescreen - atikmpag.sys

    - by Mochan
    Information

    Name: atikmpag.sys bluescreen (BSOD, or BlueScreen of Death)
    Error code: 0x00000116
    Appears when: playing games, watching videos
    Can be reproduced: yes
    Suspected cause: graphics card

    System Specifications

    Before we begin, I will give you my specifications:

    OS: Windows 7 x64 Home Edition
    Model: Dell Inspiron 15R Special Edition (aka Inspiron 7520) (add 2 GB of RAM to the model linked)
    Hard drive: 1 TB
    CPU: Intel quad-core i7 (Sandy Bridge, I think) at 2.10 GHz (I think it can be clocked to 3 GHz?)
    RAM: 6 GB (I think 1 x 4 GB and 1 x 2 GB)
    Display: 15.6" HD (1366x768)
    Graphics: AMD Radeon HD 7500M 2GB

    Details

    So now that you know some basics about my computer, I'll get to the problem. Being an Ubuntu user, I hardly use Windows, but occasionally I do: to run Skyrim and other games incompatible with Linux and WINE (the new Sims 3 Seasons patch is also now not supported). The bluescreen happens when playing these two games, and supposedly others; I have also heard of it happening to people while watching HD movies and video series. Watching the bluescreen as it appears, I can see it is the 'atikmpag.sys' error. I have not installed much, and nothing significant: I think I have downloaded Skyrim, Firefox and The Sims 3. I haven't done much more... since Ubuntu is definitely the best in comparison! (No hate, just a joke :P). I can reproduce it easily (just by running a game for less than a minute). It happens every time, but never at a specific moment. So far I have found that it may be caused by a lack of power to the graphics card, or the card may be damaged or fried. I've had the computer for a mere 4 months (and have had other problems with it also). I have contacted Dell, but they are useless beyond belief. Anyone with any information, solutions or details is encouraged to share, as it would be immensely appreciated.

    Read the article

  • Copy a hard drive from a failed desktop machine using a second working one [closed]

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80 GB IDE drive. I also have PC-B, running Windows Vista, which is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy the user folders (i.e. Documents and Settings/[username]/*), I am told that I cannot access them due to user permissions, even though I am doing this under an administrator account on PC-B. So the question is: how can I "back up" the data, preferably without making any changes to the drive contents? The reason for this is that PC-A may be failing due to a bad PSU, so I intend to replace the PSU before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.

    Read the article

  • Advice on logic circuits and serial communications

    - by Spencer Ruport
    As far as I understand the serial port so far, transferring data is done over pin 3, as shown here: [pinout diagram]. There are two things that make me uncomfortable about this. The first is that it seems to imply that the two connected devices agree on a signal speed, and the second is that even if they are configured to run at the same speed, you run into possible synchronization issues... right? Such things can be handled, I suppose, but it seems like there must be a simpler method. What seems like a better approach to me would be to have one of the serial port pins send a pulse that indicates that the next bit is ready to be stored. So if we're hooking these pins up to a shift register we basically have: (some pulse pin)-clk, tx-d. Is this a common practice? Is there some reason not to do this?

    EDIT: Mike shouldn't have deleted his answer. His I2C (2-pin serial) approach seems fairly close to what I did. The serial port doesn't have a clock, you're right, nobugz, but that's basically what I've done. See here:

    private void SendBytes(byte[] data)
    {
        int baudRate = 0;
        int byteToSend = 0;
        int bitToSend = 0;
        byte bitmask = 0;
        byte[] trigger = new byte[1];
        trigger[0] = 0;

        SerialPort p;
        try
        {
            p = new SerialPort(cmbPorts.Text);
        }
        catch
        {
            return;
        }

        if (!int.TryParse(txtBaudRate.Text, out baudRate)) return;
        if (baudRate < 100) return;
        p.BaudRate = baudRate;

        for (int index = 0; index < data.Length * 8; index++)
        {
            byteToSend = (int)(index / 8);
            bitToSend = index - (byteToSend * 8);
            bitmask = (byte)System.Math.Pow(2, bitToSend);

            p.Open();
            p.Parity = Parity.Space;
            p.RtsEnable = (byte)(data[byteToSend] & bitmask) > 0;
            s = p.BaseStream;
            s.WriteByte(trigger[0]);
            p.Close();
        }
    }

    Before anyone tells me how ugly this is or how I'm destroying my transfer speeds, my quick answer is that I don't care about that. My point is that this seems much, much simpler than the method you described in your answer, nobugz. And it wouldn't be as ugly if the .NET SerialPort class gave me more control over the pin signals. Are there other serial port APIs that do?

    Read the article

  • Exchange Server 2010: move mailboxes from a recovered and mounted EDB to a user's mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server named "2008Enterprise" crashed, so I created a new Exchange 2010 server named "Providence". On Providence I ran the command:

    New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran the following command from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147:

    eseutil /p E00

    I then mounted JBCMail with the mount command (note: I do not have the full command as I typed it). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail. The JBCMail database is shown as mounted on the Exchange server named Providence. I can also see the crashed Exchange server named 2008Enterprise; in the EMC, the crashed server's Copy Status under Server Configuration > Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? And how do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server:

    Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail

    Read the article

  • Handling form security

    - by Harun Baris Bulut
    So how do you maintain form security around posting data to a different page? For instance, you have a member who tries to change his or her personal settings, and you redirect the member to www.domain.com/member/change/member_id. The member changes the values but posts the data to another page by changing the form action with Firebug or something else, for instance to www.domain.com/member/change/member_id_2. How do you handle this problem without using sessions?
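    One common approach, sketched below in C# (the key literal is a placeholder for a server-side secret, and this is an illustration rather than a complete solution), is to stop trusting any id posted back by the browser: derive it from the authenticated identity where possible, or, when the id must round-trip through the form, sign it so tampering is detectable.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // A minimal sketch: member_id round-trips through the form only together
    // with an HMAC signature, so a tampered id fails verification.
    public static class SignedId
    {
        private static readonly byte[] Key =
            Encoding.UTF8.GetBytes("server-side-secret-key"); // placeholder

        public static string Sign(string memberId)
        {
            using (var hmac = new HMACSHA256(Key))
            {
                byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(memberId));
                return memberId + ":" + Convert.ToBase64String(mac);
            }
        }

        // Returns the member id only if the signature is intact; null otherwise.
        // (A constant-time comparison would be preferable in production.)
        public static string Verify(string token)
        {
            int sep = token.LastIndexOf(':');
            if (sep < 0) return null;
            string id = token.Substring(0, sep);
            return token == Sign(id) ? id : null;
        }
    }

    On the POST you would Verify the hidden field and reject the request when it returns null; or, simpler still, ignore the posted id entirely and use the one tied to the authenticated login.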

    Read the article

  • File upload progress

    - by Cornelius
    I've been trying to track the progress of a file upload but keep ending up at dead ends (uploading from a C# application, not a web page). I tried using WebClient as such:

    class Program
    {
        static volatile bool busy = true;

        static void Main(string[] args)
        {
            WebClient client = new WebClient();
            // Add some custom header information
            client.Credentials = new NetworkCredential("username", "password");
            client.UploadProgressChanged += client_UploadProgressChanged;
            client.UploadFileCompleted += client_UploadFileCompleted;
            client.UploadFileAsync(new Uri("http://uploaduri/"), "filename");
            while (busy)
            {
                Thread.Sleep(100);
            }
            Console.WriteLine("Done: press enter to exit");
            Console.ReadLine();
        }

        static void client_UploadFileCompleted(object sender, UploadFileCompletedEventArgs e)
        {
            busy = false;
        }

        static void client_UploadProgressChanged(object sender, UploadProgressChangedEventArgs e)
        {
            Console.WriteLine("Completed {0} of {1} bytes", e.BytesSent, e.TotalBytesToSend);
        }
    }

    The file does upload and progress is printed out, but the progress is much faster than the actual upload: when uploading a large file, the progress reaches the maximum within a few seconds, while the actual upload takes a few minutes (it is not just waiting on a response; the data has not yet all arrived at the server). So I tried using HttpWebRequest to stream the data instead (I know this is not the exact equivalent of a file upload, as it does not produce multipart/form-data content, but it does serve to illustrate my problem). I set AllowWriteStreamBuffering to false and set the ContentLength as suggested by this question/answer:

    class Program
    {
        static void Main(string[] args)
        {
            FileInfo fileInfo = new FileInfo(args[0]);
            HttpWebRequest client = (HttpWebRequest)WebRequest.Create(new Uri("http://uploadUri/"));
            // Add some custom header info
            client.Credentials = new NetworkCredential("username", "password");
            client.AllowWriteStreamBuffering = false;
            client.ContentLength = fileInfo.Length;
            client.Method = "POST";
            long fileSize = fileInfo.Length;
            using (FileStream stream = fileInfo.OpenRead())
            {
                using (Stream uploadStream = client.GetRequestStream())
                {
                    long totalWritten = 0;
                    byte[] buffer = new byte[3000];
                    int bytesRead = 0;
                    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        uploadStream.Write(buffer, 0, bytesRead);
                        uploadStream.Flush();
                        Console.WriteLine("{0} of {1} written", totalWritten += bytesRead, fileSize);
                    }
                }
            }
            Console.WriteLine("Done: press enter to exit");
            Console.ReadLine();
        }
    }

    The request does not start until the entire file has been written to the stream, and it already shows full progress by the time it starts (I'm using Fiddler to verify this). I also tried setting SendChunked to true (both with and without setting the ContentLength as well). It seems like the data still gets cached before being sent over the network. Is there something wrong with one of these approaches, or is there perhaps another way I can track the progress of file uploads from a Windows application?
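    For what it's worth, one pattern sometimes used for this (a sketch, assuming .NET 4.5+ for HttpClient; the upload URI is a placeholder) is to wrap the file stream and report progress from Read, so the numbers track what the HTTP stack has actually consumed rather than writes into a local buffer:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    // A read-through stream wrapper: progress is reported as the HTTP stack
    // pulls bytes out of the file, not as bytes enter a buffer.
    class ProgressStream : Stream
    {
        private readonly Stream inner;
        private readonly Action<long> onProgress;
        private long total;

        public ProgressStream(Stream inner, Action<long> onProgress)
        { this.inner = inner; this.onProgress = onProgress; }

        public override int Read(byte[] buffer, int offset, int count)
        {
            int n = inner.Read(buffer, offset, count);
            total += n;
            onProgress(total); // bytes handed to the HTTP stack so far
            return n;
        }

        public override bool CanRead { get { return true; } }
        public override bool CanSeek { get { return false; } }
        public override bool CanWrite { get { return false; } }
        public override long Length { get { return inner.Length; } }
        public override long Position
        { get { return total; } set { throw new NotSupportedException(); } }
        public override void Flush() { }
        public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
        public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    }

    class UploadExample
    {
        static async Task RunAsync(string path)
        {
            using (var client = new HttpClient())
            using (var file = File.OpenRead(path))
            using (var wrapped = new ProgressStream(file,
                       sent => Console.WriteLine("{0} of {1} bytes", sent, file.Length)))
            {
                await client.PostAsync("http://uploaduri/", new StreamContent(wrapped));
            }
        }
    }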

    Read the article

  • Eclipse: Slow startup time

    - by ct2k7
    Hello, I've got Eclipse 3.6.1 on my MacBook Air (2010), and I'm getting slow startup times; well, slow compared to my desktop, which is somewhat less powerful and a few years old. The startup generally takes 15 seconds, of which 4 are spent just on the Eclipse splash screen, before Eclipse loads anything. No projects are open at startup. Here's a copy of my eclipse.ini:

    -startup
    ../../../plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar
    --launcher.library
    ../../../plugins/org.eclipse.equinox.launcher.cocoa.macosx.x86_64_1.1.1.R36x_v20100810
    -showsplash
    org.eclipse.platform
    --launcher.XXMaxPermSize
    512m
    --launcher.defaultAction
    openFile
    -vmargs
    -Xms256m
    -Xmx512m
    -Xdock:icon=../Resources/Eclipse.icns
    -XstartOnFirstThread
    -Dorg.eclipse.swt.internal.carbon.smallFonts
    -Dosgi.requiredJavaVersion=1.6
    -Xverify:none
    -XX:+UseConcMarkSweepGC
    -XX:+CMSClassUnloadingEnabled
    -XX:+CMSPermGenSweepingEnabled
    -XX:+UnlockExperimentalVMOptions
    -XX:+AggressiveOpts
    -XX:+StringCache
    -XX:+UseFastAccessorMethods
    -XX:+UseLargePages
    -XX:LargePageSizeInBytes=4m
    -XX:AllocatePrefetchLines=1
    -XX:AllocatePrefetchStyle=1
    -Dide.gc=true

    The problem doesn't seem to be related to plugins. I've disabled the ones I don't need, and regardless of this configuration, or whether all of them are selected at startup, it only takes 1 second to load the plugins. I'm running the Eclipse 3.6.1 Cocoa x64 build (vanilla) with the Zend Studio plugin. The machine has 4 GB of RAM, an SSD with over 64% free space, and a 1.6 GHz CPU (4 MB L2 cache). The OS is Mac OS X 10.6.6 with the latest Java available, 1.6. For comparison, my desktop, an old P4 at 3 GHz (512 KB L2 cache) with a 7200 RPM drive and under 40% free space, loads Eclipse (same config) consistently in under 7 seconds. Note, this one is a Windows machine with the latest Java installed.

    Read the article

  • Problem Upgrading NHibernate SQLite Application to .Net 4.0

    - by Xavin
    I have a WPF Application using Fluent NHibernate 1.0 RTM and System.Data.SQLite 1.0.65 that works fine in .Net 3.5. When I try to upgrade it to .Net 4.0 everything compiles but I get a runtime error where the innermost exception is this: `The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found.` The only change made to the project was switching the Target Framework to 4.0.

    Read the article
