Search Results

Search found 30085 results on 1204 pages for 'read only'.

  • SMS Receiving using DOTNET C#

    - by sheery
    Hi dears, I have built an application in C# to send and receive SMS messages. Sending works fine, but when I try to read SMS messages from my mobile through the application I get the following error: "Error: Phone reports generic communication error or syntax error." Can anyone help me with this? My code for reading SMS is:

        private void btnReadMessages_Click(object sender, System.EventArgs e)
        {
            Cursor.Current = Cursors.WaitCursor;
            string storage = GetMessageStorage();
            try
            {
                // Read all SMS messages from the storage
                DecodedShortMessage[] messages = comm.ReadMessages(PhoneMessageStatus.All, storage);
                foreach (DecodedShortMessage message in messages)
                {
                    Output(string.Format("Message status = {0}, Location = {1}/{2}",
                        StatusToString(message.Status), message.Storage, message.Index));
                    ShowMessage(message.Data);
                    Output("");
                }
                Output(string.Format("{0,9} messages read.", messages.Length.ToString()));
                Output("");
            }
            catch (Exception ex)
            {
                ShowException(ex);
            }
            Cursor.Current = Cursors.Default;
        }

  • Django gives "I/O operation on closed file" error when reading from a saved ImageField

    - by Rob Osborne
    I have a model with two image fields, a source image and a thumbnail. When I update the source image, save it, and then try to read it back to crop/scale it into a thumbnail, I get an "I/O operation on closed file" error from PIL. If I update the source image, don't save it, and then try to read it to crop/scale, I get an "attempting to read from closed file" error from PIL. In both cases the source image is actually saved and available in later request/response loops. If I don't crop/scale in a single request/response loop, but instead upload on one page and then crop/scale on another, it all works fine. This looks like a cached buffer being reused somehow, either by PIL or by the Django file storage. Any ideas on how to make an ImageField readable after saving?

  • Reading line for Ubuntu users

    - by Willy Levine
    Normally when I read a book I use a bookmark held horizontally under the current line I'm reading to help me keep my eyes on the right spot. When I read a PDF or other document on my computer I would like to be able to do the same thing, only with a line on the screen controlled by the up and down arrow keys. Any suggestions for an application that would do this? I'm an Ubuntu Linux user.

  • SVN/Tortoise Always Makes Files Readonly

    - by Gav
    Whenever I do a checkout/update on my SVN project using Tortoise, the files all get set to read-only. Is there an option to stop this? I have one particular project where I need checkouts/updates to never make files read-only. Thanks.

  • Is java.util.Scanner that slow?

    - by Cristian Vrabie
    Hi guys, in an Android application I want to use the Scanner class to read a list of floats from a text file (a list of vertex coordinates for OpenGL). The exact code is:

        Scanner in = new Scanner(new BufferedInputStream(getAssets().open("vertexes.off")));
        final float[] vertexes = new float[nrVertexes];
        for (int i = 0; i < nrVertexFloats; i++) {
            vertexes[i] = in.nextFloat();
        }

    However, this seems to be incredibly slow (it took 30 minutes to read 10,000 floats!), as tested on the 2.1 emulator. What's going on? I don't remember Scanner being that slow when I used it on the PC (truth be told, I never read more than 100 values before). Or is it something else, like reading from an asset input stream? Thanks for the help!
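
    A common workaround, sketched below under stated assumptions rather than taken from this thread: Scanner's regex-driven, locale-aware nextFloat() is known to be slow on Android, while buffering lines with BufferedReader and parsing with Float.parseFloat is typically orders of magnitude faster. The stream source and count parameter mirror the question; the class and method names are made up.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        public final class VertexLoader {
            // Reads 'count' whitespace-separated floats from a stream, e.g. an
            // Android asset opened with getAssets().open("vertexes.off").
            public static float[] readFloats(InputStream in, int count) throws IOException {
                float[] out = new float[count];
                BufferedReader reader = new BufferedReader(new InputStreamReader(in), 8192);
                int i = 0;
                String line;
                while (i < count && (line = reader.readLine()) != null) {
                    for (String token : line.trim().split("\\s+")) {
                        if (token.isEmpty()) continue; // a blank line yields one empty token
                        out[i++] = Float.parseFloat(token);
                        if (i == count) break;
                    }
                }
                reader.close();
                return out;
            }
        }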

  • Netlink user-space and kernel-space communication

    - by sasayins
    Hi, I am learning embedded systems programming using Linux as my main platform, and I want to create a device event management service. This service is a user-space application/daemon that will detect when a connected hardware module triggers an event. My problem is that I don't know where to start. I read about the Netlink mechanism for userspace-kernelspace communication, and it seems like a good idea, but I'm not sure it is the best solution. I also read that the udev device manager uses Netlink to wait for "uevent" messages from kernel space, but it is not clear to me how to do that. I read about polling sysfs, but it seems like a bad idea to poll the filesystem. Which implementation do you think I should use in my service? Netlink (hard, and I have no clue how), or polling sysfs (not sure it works)? Thanks

  • Simple CanCan problem

    - by sscirrus
    I have just started with CanCan; here's a sample of the code:

        # Ability.rb
        def initialize(user)
          user ||= User.new
          can :read, Link
        end

        # view.html.erb
        <% if can? :read, @link %> ... <% end %>

    This is from the GitHub repo for CanCan, but it doesn't seem to work (it returns false and stops the ... code from running). When I change the view to <% if can? :read, Link %>, it works. But this is different from the CanCan readme. Do you know where I'm going wrong here?

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run:

        import cPickle as pickle
        f = open("bigNetworkXGraph.pickle", "rb")
        binary_data = f.read()               # this part doesn't take long
        graph = pickle.loads(binary_data)    # this takes ages

    How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which one I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping somebody can tell me how to increase some read buffer buried in the pickle implementation.

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes: I need a pattern for creating independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (the reading or writing position) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether the segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store a start and an end position. Reading from a view, which would translate to reading from the underlying stream adjusted by the start/end positions, would change the state of the underlying stream instance. What I could do, when reading from a view instance, is adjust the Position of the stream and then read the chunks I need, but I cannot do that concurrently. Why is it designed this way, and what kind of pattern could I implement to create independent views over a single stream instance that allow reading/writing independently (and concurrently)?
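
    One way out, sketched here in Java under the assumption that the backing store is a file: java.nio's FileChannel offers positional reads that never touch the channel-wide position, so each view can carry its own offset and several views can read the same channel concurrently. The class below is illustrative, not from the question.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        // A read-only view over [start, end) of a shared FileChannel. Each view
        // keeps its own offset; FileChannel.read(dst, position) leaves the
        // channel's shared position untouched, so views don't interfere.
        final class StreamView {
            private final FileChannel channel;
            private final long start, end;
            private long offset; // position local to this view

            StreamView(FileChannel channel, long start, long end) {
                this.channel = channel;
                this.start = start;
                this.end = end;
            }

            synchronized int read(ByteBuffer dst) throws IOException {
                long remaining = end - (start + offset);
                if (remaining <= 0) return -1; // view exhausted
                if (dst.remaining() > remaining) {
                    dst.limit(dst.position() + (int) remaining); // don't read past the view
                }
                int n = channel.read(dst, start + offset); // positional read
                if (n > 0) offset += n;
                return n;
            }
        }

    Two views created over disjoint (or overlapping) ranges of one FileChannel can then be handed to separate threads without any shared seek state.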

  • How should I interpret the specifications of a SSD?

    - by paulgreg
    When considering buying an SSD, how should I interpret its different specifications? Here are some specific things that need to be deciphered:

    - Controller (this can affect performance and endurance more than all other factors combined)
    - Bus technology
    - Form factor (physical size)
    - Capacity
    - NAND or NOR technology
    - Power consumption during read, during write, and when idle
    - Read/write burst and sustained throughput

    I would like all of these explained in more detail, along with their actual importance in selecting an SSD.

  • Perl, treat string as binary byte array

    - by Mike
    In Perl, is it appropriate to use a string as a byte array containing 8-bit data? All the documentation I can find on this subject focuses on 7-bit strings. For instance, if I read some data from a binary file into $data:

        my $data;
        open FILE, "<", $filepath;
        binmode FILE;
        read FILE, $data, 1024;

    and I want to get the first byte out, is substr($data, 1, 1) appropriate? (Again, assuming it is 8-bit data.) I come from a mostly C background, and I am used to passing a char pointer to a read() function. My problem might be that I don't understand what the underlying representation of a string is in Perl.

  • Linux software Raid 10 no superblock

    - by Shoshomiga
    I have a software RAID 10 with 6 x 2 TB hard drives (RAID 1 for /boot); Ubuntu 10.04 is the OS. I had a RAID controller failure that put 2 drives out of sync and crashed the system. Initially the OS didn't boot and dropped into initramfs instead, saying that the drives were busy, but I eventually managed to bring the RAID up by stopping and reassembling the drives.

    The OS booted up and said that there were filesystem errors. I chose to ignore them, because it would remount the filesystem read-only if there was a problem. Everything seemed to be working fine and the 2 drives started to rebuild. I was sure it was a SATA controller failure because I had DMA errors in my log files. The OS crashed soon after that with ext errors.

    Now it's not bringing up the RAID: it says that there is no superblock on /dev/sda2. I tried to reassemble manually with all the device names, but it still would not bring up the RAID 10, complaining about the missing superblock on sda2; sda1 was also dropped from the RAID 1. When I ran examine on the RAID 10, it said that one of the initially failed drives is a spare, the other is a spare rebuilding, and sda2 is removed. It seems that sda decided to fail right when the system was vulnerable to it, because when I boot a live CD it spews out sda unrecoverable read failures.

    I have been trying to fix this all week, but I'm not sure where to go with it now. I ordered more hard drives because I didn't have a complete backup, but it's too late for that now, and the only thing I could do is mirror all the hard drives onto the new ones (I'm not sure whether sda was mirrored without errors). I read on the internet that you can recover from this by recreating the array with the same options as when it was made; however, because sda is failing I can't use it, and I don't want to risk using its mirror instead, so I'm waiting to get another hard drive. I'm also not sure whether to include the out-of-sync drives, or whether I can actually use those instead to recover the array. Sorry if this is a mess to read, but I've been trying to fix this all day and it's late at night now; any thoughts would be greatly appreciated. I also did a memtest and changed the motherboard, in addition to everything else.

    EDIT: This is my partition layout:

        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0009c34a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048      511999      254976   83  Linux
        /dev/sdb2          512000  3904980991  1952234496   83  Linux
        /dev/sdb3      3904980992  3907028991     1024000   82  Linux swap / Solaris

  • NoSQL or Ehcache caching?

    - by paddydub
    I'm building a route-planner web app using Spring/Hibernate/Tomcat and a MySQL database. The database contains read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster: each time a route is calculated, the application performs approximately 1,000 reads against the database. I have set up Ehcache, which greatly improves the read times. I'm now setting up Terracotta + Ehcache distributed caching to share the cache between multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering whether MongoDB or Redis would be better suited. I have no experience with NoSQL, but I would appreciate any ideas. What I need is quick access to the read-only database.
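
    For reference, a minimal cache-aside pattern with the Ehcache 2.x API might look like the sketch below; the cache name, key type, domain class, and loadStopFromDatabase helper are illustrative assumptions, not taken from the question.

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.CacheManager;
        import net.sf.ehcache.Element;

        final class BusStop { /* domain fields elided */ }

        public final class BusStopRepository {
            private final Cache cache;

            public BusStopRepository(CacheManager manager) {
                // "busStops" must be declared in ehcache.xml (or added programmatically)
                this.cache = manager.getCache("busStops");
            }

            public BusStop findStop(long stopId) {
                Element hit = cache.get(stopId);
                if (hit != null) {
                    return (BusStop) hit.getObjectValue(); // served from memory
                }
                BusStop stop = loadStopFromDatabase(stopId); // hypothetical DB lookup
                cache.put(new Element(stopId, stop)); // read-only data, never invalidated
                return stop;
            }

            private BusStop loadStopFromDatabase(long stopId) {
                throw new UnsupportedOperationException("Hibernate/JDBC lookup elided");
            }
        }

    Because the data is never updated, the cache needs no expiry or invalidation, which is also the easiest case for Terracotta-clustered Ehcache.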

  • App engine downtime

    - by DutrowLLC
    I've noticed that Google App Engine seems to have a fair amount of downtime during which the datastore is placed in read-only mode, frequently in the middle of the day. Is this something that happens only during early development, or something I can expect to keep occurring? I'm developing an application that helps small businesses handle their operations; among other things, it takes appointments and routes phone calls. I'd like some suggestions on how to handle the times when the datastore is read-only, for example:

    - What if our client is on the phone with a customer taking down an appointment when the datastore is read-only? It would not be acceptable to ask the client to come back later to save, especially in the middle of the day.
    - What if there is an incoming call and the application cannot store the record or properly route the call because database writes are unavailable?

    How are these types of issues normally handled?
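
    One defensive pattern, sketched in App Engine for Java terms under assumptions (the entity class and the queueForLater fallback are hypothetical): probe the Capabilities API before writing, and divert the data somewhere durable while DATASTORE_WRITE is disabled.

        import com.google.appengine.api.capabilities.CapabilitiesService;
        import com.google.appengine.api.capabilities.CapabilitiesServiceFactory;
        import com.google.appengine.api.capabilities.Capability;
        import com.google.appengine.api.capabilities.CapabilityStatus;

        final class Appointment { /* fields elided */ }

        public final class SafeWriter {
            private final CapabilitiesService capabilities =
                    CapabilitiesServiceFactory.getCapabilitiesService();

            public void saveAppointment(Appointment appointment) {
                CapabilityStatus status =
                        capabilities.getStatus(Capability.DATASTORE_WRITE).getStatus();
                if (status == CapabilityStatus.DISABLED) {
                    // Datastore is in read-only maintenance: park the write in
                    // something that still accepts data (e.g. a task queue) and
                    // replay it once writes come back.
                    queueForLater(appointment); // hypothetical fallback
                    return;
                }
                // ... normal datastore write path ...
            }

            private void queueForLater(Appointment appointment) { /* elided */ }
        }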

  • How to deal with extra space characters while Reading a CSV file?

    - by Ravi Dutt
    I am reading a CSV file with an open-source CSV API, as shown below:

        CSVReader reader = new CSVReader(new FileReader(filePath), '\n');
        String[] read;
        String[] values;
        if ((read = reader.readNext()) != null) {
            values = read[0].split(" (?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        }

    When I read this CSV file line by line and split each line with the delimiter, every value I get back contains an extra space character after each character in the string. Suppose the value in the file is "ABC"; after reading it from the CSV file reader I get " A B C ". I used removeAll("\s+","") on each value, but even that is not working. Thank you in advance.
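
    Two hedged observations, not from the original post: java.lang.String has no removeAll method (replaceAll with a regex is the standard call), and a space after every character is a classic symptom of a UTF-16-encoded file being decoded with an 8-bit charset, in which case the right fix is to open the file with the correct encoding rather than to strip spaces. A small sketch (the file name is assumed):

        import java.io.BufferedReader;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public final class CsvCleanup {
            public static void main(String[] args) throws IOException {
                // 1) Stripping whitespace with the real String API:
                String padded = " A B C ";
                System.out.println(padded.replaceAll("\\s+", "")); // prints ABC

                // 2) If the stray spaces come from a mis-decoded UTF-16 file,
                //    fix the charset at the source instead:
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(new FileInputStream("input.csv"), "UTF-16"));
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
                reader.close();
            }
        }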

  • How does Linux blocking I/O actually work?

    - by tgguy
    In Linux, when you make a blocking I/O call like read or accept, what actually happens? My thoughts: the process gets taken off the run queue and put into a waiting or blocked state on some wait queue. Then, when a TCP connection is made (for accept), or the hard drive is ready for a file read, a hardware interrupt is raised which lets the waiting processes wake up and run (in the case of a file read, how does Linux know which processes to awaken, as there could be lots of processes waiting on different files?). Or perhaps, instead of hardware interrupts, the individual process itself polls to check availability. I'm not sure; help?

  • Android: Send arbitrary objects within Activities?

    - by Sebastian
    I have read some questions here, but I didn't find a solution. I have read about Parcelable, Intents, and sharing specific data between Activities in the Android dev docs (both the dev guide and the reference). Here's the scenario: I have one ListActivity that fills an object by parsing an XML file and shows a list of values; when an item is clicked, I want to return the object that represents the clicked item to the activity that called it, and then call another activity with this object. I read about implementing Parcelable, but it doesn't seem to be the way: a Parcelable implementation receives a Parcel in its constructor and then reads the values from it (or at least that is what I understood). That makes no sense to me, and I can't see how to implement it on that basis; I build the object by parsing the XML file, I don't have a Parcel. I'd appreciate some clarification on this. Regards.
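
    A minimal sketch of the usual Parcelable pattern, with made-up class and field names: the Parcel constructor is only used by Android on the receiving side; the sending side builds the object normally (e.g. from the XML parser), and writeToParcel serializes it.

        import android.os.Parcel;
        import android.os.Parcelable;

        public class XmlItem implements Parcelable {
            private final String name;
            private final int count;

            // Normal constructor: used when building the object from the XML parser.
            public XmlItem(String name, int count) {
                this.name = name;
                this.count = count;
            }

            // Parcel constructor: used only when the object is recreated on the
            // receiving side; the read order must match the write order below.
            private XmlItem(Parcel in) {
                this.name = in.readString();
                this.count = in.readInt();
            }

            @Override
            public void writeToParcel(Parcel out, int flags) {
                out.writeString(name);
                out.writeInt(count);
            }

            @Override
            public int describeContents() {
                return 0;
            }

            public static final Parcelable.Creator<XmlItem> CREATOR =
                    new Parcelable.Creator<XmlItem>() {
                        public XmlItem createFromParcel(Parcel in) { return new XmlItem(in); }
                        public XmlItem[] newArray(int size) { return new XmlItem[size]; }
                    };
        }

    The object can then travel in an Intent extra: setResult(RESULT_OK, new Intent().putExtra("item", item)) on one side, and getParcelableExtra("item") on the other.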

  • Setting a timeout for an InputStreamReader variable

    - by Noona
    I have a server running that accepts connections from client sockets, and I need to read the input from those sockets. Now suppose a client opens a connection to my server but never sends anything over it. In this case, while my server tries to read the input through the client socket's input stream, an exception will eventually be thrown, but before that happens I would like a timeout of, say, 5 seconds. How can I do this? Currently my code looks like this on the server side:

        try {
            InputStreamReader clientInputStream =
                    new InputStreamReader(clientSocket.getInputStream());
            int c;
            StringBuffer requestBuffer = new StringBuffer();
            while ((c = clientInputStream.read()) != -1) {
                requestBuffer.append((char) c);
                if (requestBuffer.toString().endsWith("\r\n\r\n"))
                    break;
            }
            request = new Request(requestBuffer.toString(), clientSocket);
        } catch (Exception e) { // catch any possible exception to keep the thread running
            try {
                if (clientSocket != null)
                    clientSocket.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
            System.err.println(e);
            //e.printStackTrace();
        }
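
    The stock JDK mechanism (a sketch assuming a plain java.net.Socket) is Socket.setSoTimeout: once a timeout is set, a blocking read throws SocketTimeoutException after the given number of milliseconds instead of blocking indefinitely.

        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        final class TimedReader {
            // Returns the request text, or null if the client stayed silent.
            static String readRequest(Socket clientSocket) throws IOException {
                clientSocket.setSoTimeout(5000); // any read now waits at most 5 s
                InputStreamReader in = new InputStreamReader(clientSocket.getInputStream());
                StringBuilder request = new StringBuilder();
                try {
                    int c;
                    while ((c = in.read()) != -1) {
                        request.append((char) c);
                        if (request.toString().endsWith("\r\n\r\n")) break;
                    }
                } catch (SocketTimeoutException timeout) {
                    // client connected but sent nothing (or too little) within 5 s
                    clientSocket.close();
                    return null;
                }
                return request.toString();
            }
        }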

  • Contradictory MySqlReader errors

    - by Lazlo
    I run the following (C#, MySQL Connector/Net):

        MySqlCommand command = connection.CreateCommand();
        command.CommandText = string.Format(
            "SELECT * FROM characters WHERE account_id = '{0}'", this.ID);
        MySqlDataReader reader = command.ExecuteReader();

        while (filler.Reader.Read())
        {
            ...
        }

    I get an error at the last line saying "Invalid attempt to Read when reader is closed." Now, if I add another line before it, as in:

        MySqlCommand command = connection.CreateCommand();
        command.CommandText = string.Format(
            "SELECT * FROM characters WHERE account_id = '{0}'", this.ID);
        MySqlDataReader reader = command.ExecuteReader();
        reader = command.ExecuteReader(); // here

        while (filler.Reader.Read())
        {
            ...
        }

    I get an error at that new line saying "There is already an open DataReader associated with this Connection which must be closed first." All right, I don't want to get picky here, but is my reader open or closed?
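
    The same one-open-reader-per-connection rule exists in JDBC, where the lifecycle may be easier to see (a hedged Java analogue, not the C# fix; the table and column are modeled on the question): re-executing a query implicitly closes the previously returned cursor, so an old reference can be "closed" at the same moment the connection still reports a reader "open".

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        final class ReaderLifecycleDemo {
            static void demo(Connection connection, long accountId) throws SQLException {
                // Bind the ID instead of string-formatting it into the SQL,
                // which also avoids SQL injection.
                PreparedStatement statement = connection.prepareStatement(
                        "SELECT * FROM characters WHERE account_id = ?");
                statement.setLong(1, accountId);

                ResultSet first = statement.executeQuery();
                ResultSet second = statement.executeQuery(); // implicitly closes 'first'

                // first.next() here would throw: that cursor is now closed, even
                // though 'second', the statement's current cursor, is open.
                while (second.next()) {
                    // process the row ...
                }
                second.close();
                statement.close();
            }
        }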

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here, so please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/) and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram).

    Running the same job, high writes and high reads on about 10,000,000 records in a single collection, on my 4-processor, 4 GB RAM MacBook and on an 8-core Ubuntu box with 64 GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM, as it is touted to do. The Linux box's default settings were as follows:

        vm.swappiness = 60
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 20
        vm.dirty_expire_centisecs = 3000
        vm.dirty_writeback_centisecs = 500

    I hazarded some guesses from the docs and blog posts for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:

        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

    I saw some immediate apparent improvement in read times. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source, and suddenly I can read at 1 ms or less per record WHILE running the write job!

    So the question is really two-fold:

    1) What are appropriate VM settings for MongoDB on Linux?

    2) (Bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j

  • Which twitter client can synchronize unread tweets?

    - by Tom Burger
    Right now I'm forced to read all my tweets in a single client on a single device (TweetDeck on my Android phone). If I switched to another device and/or client, I would need to search for the last unread tweet, which is sometimes complicated (too many tweets). So, the question: is there a client that can keep the read/unread status of tweets in sync across multiple devices? My target systems for now are Android and MS Windows, but Linux would also be handy.

  • How to skip extra lines before the header of a tab-delimited file in R

    - by Michael Dunn
    The software I am using produces log files with a variable number of lines of summary information, followed by lots of tab-delimited data. I am trying to write a function that reads the data from these log files into a data frame, ignoring the summary information. The summary information never contains a tab, so the following function works:

        read.parameters <- function(file.name, ...) {
          lines <- scan(file.name, what = "character", sep = "\n")
          first.line <- min(grep("\\t", lines))
          return(read.delim(file.name, skip = first.line - 1, ...))
        }

    However, these log files are quite big, so reading the file twice is very slow. Surely there is a better way?

  • Squid Log Rotation and Sarg

    - by beakersoft
    We have just set up Squid as our proxy, and I was going to use Sarg to analyze the log files. I had initially set the Squid logs to rotate every day so they don't get huge. The problem is that I can't see an option in the Sarg config to read a folder full of Squid log files (say *.log). Is there an easy way to do this, or am I going to have to write a bash script or something to process them all into one file before Sarg reads it? Cheers, Luke
