Search Results

Search found 24117 results on 965 pages for 'write through'.


  • Storage drives are causing a system crash

    - by Chad
    I'm running CentOS 5.4 with a 750 GB (NTFS) drive and a 2 TB drive for storage. Originally I installed the 750 and everything seemed fine; then I installed the 2 TB drive, already partitioned as NTFS. I noticed that when I copied a lot of videos, the machine would crash (no mouse or response from the server) about 20 minutes in. After some troubleshooting I noticed the 750 would also crash during the same task, so I suspected NTFS might be the problem. I unmounted the 2 TB drive and tried to partition and format it as ext2, but when using parted it would crash at the point "writing inode tables". Looking at the dmesg logs, I believe this is the error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea as to what could be causing this?
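
    If it helps the diagnosis, the MTRR layout the kernel is complaining about can be inspected directly (a sketch; the ranges and messages will differ per machine):

    ```sh
    # Show the MTRR ranges the kernel has programmed
    cat /proc/mtrr

    # Watch for MTRR or I/O errors while reproducing the crash
    dmesg | grep -i -e mtrr -e error
    ```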

    Read the article

  • Microsoft Access vs Native SQL

    - by ktm5124
    Hypothetical: let's say you are writing complex queries against a database, and it is very important that the data you extract is the correct result set (e.g., that you didn't mess up a JOIN by not using all the correct keys, and all the other things that can go wrong, et cetera). What would you rather use to do this? Would you write the query using Microsoft Access and its Design View, or would you write it in native SQL using a SQL IDE? Which is the better professional choice? Thanks in advance for your feedback!

    Read the article

  • How to make flyspell bypass some words by context?

    - by manu
    Hi, I use Emacs for most of my writing. I write in reStructuredText and then transform it to LaTeX after some preprocessing, since I write my citations à la LaTeX. This is an excerpt of one of my texts (in Spanish):

        En \cite[pp.~XXVIII--XXIX]{Crnkovic2002} se brindan algunos riesgos
        que se pueden asumir con el desarrollo basado en componentes, los

    This text is processed by some custom scripts that deal with the \cite part so rst2latex can do its job. When I activate flyspell-mode, it flags most of the citation keys as spelling errors. How can I tell flyspell not to spellcheck anything within \cite commands? Furthermore, how can I combine rst-mode and flyspell, so that rst-mode keeps flyspell from spellchecking the following?

    - reST comments
    - reST code literals
    - reST directive parameters and arguments
    - reST raw directive contents

    Any ideas?
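
    Flyspell consults a per-mode predicate, the same mechanism flyspell-prog-mode uses; a sketch along these lines (the function name is invented, and the regexp covers only the \cite case, not the reST constructs):

    ```elisp
    (defun my-rst-flyspell-verify ()
      "Return nil when point is inside a \\cite command, so flyspell skips it."
      (not (looking-back "\\\\cite\\(\\[[^]]*\\]\\)?{[^}]*"
                         (line-beginning-position))))

    ;; flyspell checks this symbol property of the current major mode
    (put 'rst-mode 'flyspell-mode-predicate 'my-rst-flyspell-verify)
    ```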

    Read the article

  • MySQL Locks: order of unblocked threads

    - by teehoo
    I have a MySQL MyISAM table being accessed by multiple PHP instances. Right now I'm using a WRITE lock to serialize access to this table. My question is: how do I ensure that the PHP instances get served on a first-come-first-served basis? Or is this the default behaviour? The official MySQL documentation doesn't mention anything about the blocked-thread order for threads requesting the same lock type (i.e., multiple threads attempting a WRITE lock). It only mentions that a writer will jump to the front of the waiting queue if readers are waiting.

    Read the article

  • Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?

    - by Jay
    I've just started learning Django and have a lot of experience building high-traffic websites with PHP and MySQL. What worries me so far is Django's optimistic assumption that you will never need to write custom SQL, and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL, which frequently needs to be told which indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file: create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
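
    For what it's worth, both halves have a story in later Django versions: manage.py inspectdb reverse-engineers models from an existing schema, and a ForeignKey can keep ORM-level joins while skipping the database constraint. A sketch (db_constraint needs Django 1.6+, and the model names are invented):

    ```python
    from django.db import models

    class Game(models.Model):
        # Joins and game.player still work in the ORM, but no FOREIGN KEY
        # constraint is emitted in the generated schema.
        player = models.ForeignKey(
            'Player',
            on_delete=models.DO_NOTHING,
            db_constraint=False,
        )
    ```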

    Read the article

  • Indexed key vs indexed separate columns: which one is faster?

    - by Jerry
    In MySQL, from a pure performance perspective: if I have a table with a large amount of data and a 10/1 read/write ratio, is read/write performance better with 4 search criteria in separate columns, all indexed, or with them combined into one single string acting as a key, stored in one indexed column? E.g., take a table with 5 columns (first name, last name, sex, country, and file) where the first four columns will ALWAYS be given as part of the search parameters, versus a table with two columns, key and file, where the value of key could be john-smith-male-australia. I don't quite get the pros and cons. The point I want to stress is the fact that all parameters will be given in a search.
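
    For concreteness, a sketch of the separate-column variant with one composite index covering all four always-present criteria (table and column names invented):

    ```sql
    CREATE TABLE person_file (
        first_name VARCHAR(50),
        last_name  VARCHAR(50),
        sex        VARCHAR(10),
        country    VARCHAR(50),
        file       VARCHAR(255),
        -- one composite index serves queries that always supply all four columns
        INDEX idx_search (first_name, last_name, sex, country)
    );

    SELECT file FROM person_file
    WHERE first_name = 'john' AND last_name = 'smith'
      AND sex = 'male' AND country = 'australia';
    ```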

    Read the article

  • StreamWriter appends random data

    - by void
    Hi, I'm seeing odd behaviour using the StreamWriter class: extra data gets written to the file by this code:

        public void WriteToCSV(string filename)
        {
            StreamWriter streamWriter = null;
            try
            {
                streamWriter = new StreamWriter(filename);
                Log.Info("Writing CSV report header information ... ");
                streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"",
                    ((int)CSVRecordType.Header).ToString("D2", CultureInfo.CurrentCulture),
                    m_InputFilename, m_LoadStartDate, m_LoadEndDate);
                int recordCount = 0;
                if (SummarySection)
                {
                    Log.Info("Writing CSV report summary section ... ");
                    foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult
                             in m_DataLoadResult.DataLoadResults)
                    {
                        streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"",
                            ((int)CSVRecordType.Summary).ToString("D2", CultureInfo.CurrentCulture),
                            categoryResult.Value.StatusString,
                            categoryResult.Value.Count.ToString(CultureInfo.CurrentCulture),
                            categoryResult.Value.Category);
                        recordCount++;
                    }
                }
                Log.Info("Writing CSV report cases section ... ");
                foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult
                         in m_DataLoadResult.DataLoadResults)
                {
                    foreach (CaseLoadResult result in categoryResult.Value.CaseLoadResults)
                    {
                        if ((LoadStatus.Success == result.Status && SuccessCases) ||
                            (LoadStatus.Warnings == result.Status && WarningCases) ||
                            (LoadStatus.Failure == result.Status && FailureCases) ||
                            (LoadStatus.NotProcessed == result.Status && NotProcessedCases))
                        {
                            streamWriter.Write("\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\"",
                                ((int)CSVRecordType.Result).ToString("D2", CultureInfo.CurrentCulture),
                                result.Status, result.UniqueId, result.Category,
                                result.ClassicReference);
                            if (RawResponse)
                            {
                                streamWriter.Write(",\"{0}\"", result.ResponseXml);
                            }
                            streamWriter.WriteLine();
                            recordCount++;
                        }
                    }
                }
                streamWriter.WriteLine("\"{0}\",\"{1}\"",
                    ((int)CSVRecordType.Count).ToString("D2", CultureInfo.CurrentCulture),
                    recordCount);
                Log.Info("CSV report written to '{0}'", filename);
            }
            catch (IOException exception)
            {
                string errorMessage = string.Format(CultureInfo.CurrentCulture,
                    "Unable to write XML report to '{0}'", filename);
                Log.Error(errorMessage);
                Log.Error(exception.Message);
                throw new MyException(errorMessage, exception);
            }
            finally
            {
                if (null != streamWriter)
                {
                    streamWriter.Close();
                }
            }
        }

    The file produced contains a set of records, one per line, 0 to N, for example:

        [Record Zero]
        [Record One]
        ...
        [Record N]

    However the file produced either contains nulls or incomplete records from further up the file appended to the end. For example:

        [Record Zero]
        [Record One]
        ...
        [Record N]
        [Lots of nulls]

    or

        [Record Zero]
        [Record One]
        ...
        [Record N]
        [Half-complete records]

    This also happens in separate pieces of code that use the StreamWriter class. Furthermore, the files produced all have sizes that are multiples of 1024. I've been unable to reproduce this behaviour on any other machine, and have tried recreating the environment. Previous versions of the application didn't exhibit this behaviour despite having the same code for the methods in question. EDIT: Added extra code.
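
    Not a diagnosis, but a defensive sketch worth trying: a using block guarantees the writer's buffer is flushed and the handle closed exactly once, even when an exception unwinds the stack:

    ```csharp
    using (StreamWriter streamWriter = new StreamWriter(filename))
    {
        // ... write the header, summary, case and count records as above ...
        streamWriter.Flush(); // push any buffered data out explicitly
    } // Dispose closes the underlying stream here, on success or failure
    ```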

    Read the article

  • Hibernate - EhCache - Which region to cache associations/sets/collections?

    - by lifeisnotfair
    Hi all, I am a newcomer to Hibernate. It would be great if someone could comment on the following question that I have. Say I have a parent class, and each parent has multiple children, so the mapping file of the parent class would be something like this:

    parent.hbm.xml

        <hibernate-mapping>
          <class name="org.demo.parent" table="parent" lazy="true">
            <cache usage="read-write" region="org.demo.parent"/>
            <id name="id" column="id" type="integer" length="10">
              <generator class="native"/>
            </id>
            <property name="name" column="name" type="string" length="50"/>
            <set name="children" lazy="true">
              <cache usage="read-write" region="org.demo.parent.children"/>
              <key column="parent_id"/>
              <one-to-many class="org.demo.children"/>
            </set>
          </class>
        </hibernate-mapping>

    children.hbm.xml

        <hibernate-mapping>
          <class name="org.demo.children" table="children" lazy="true">
            <cache usage="read-write" region="org.demo.children"/>
            <id name="id" column="id" type="integer" length="10">
              <generator class="native"/>
            </id>
            <property name="name" column="name" type="string" length="50"/>
            <many-to-one name="parent_id" column="parent_id" type="integer" length="10" not-null="true"/>
          </class>
        </hibernate-mapping>

    So for the set children, should we specify the region org.demo.parent.children, where it would cache the association, or should we use the cache region org.demo.children, where the children themselves get cached? I am using EhCache as the 2nd-level cache provider. I tried to search for an answer to this question but couldn't find anything in this direction. It makes more sense to use org.demo.children, but I don't know in which scenarios one should use a separate cache region for associations/sets/collections, as in the above case. Kindly provide your inputs, and let me know if I am not clear in my question. Thanks all.
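
    One point that may resolve the dilemma (hedged, since providers differ in details): Hibernate's collection cache stores only the identifiers of the children, while the child entities themselves sit in their own entity region, so org.demo.parent.children and org.demo.children hold different data and can be tuned separately in ehcache.xml. A sketch, with placeholder tuning values:

    ```xml
    <ehcache>
      <!-- the collection region: lists of child ids per parent -->
      <cache name="org.demo.parent.children"
             maxElementsInMemory="1000"
             eternal="false"
             timeToLiveSeconds="300"/>
      <!-- the entity region: the child rows themselves -->
      <cache name="org.demo.children"
             maxElementsInMemory="5000"
             eternal="false"
             timeToLiveSeconds="300"/>
    </ehcache>
    ```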

    Read the article

  • Problem when compressing a file using VB.NET

    - by Amr Elnashar
    I have a file like "Sample.bak", and when I compress it to "Sample.zip" I lose the file extension inside the zip file. I mean, when I open the compressed file I find "Sample" without any extension. I use this code:

        Dim name As String = Path.GetFileName(filePath).Replace(".Bak", "")
        Dim source() As Byte = System.IO.File.ReadAllBytes(filePath)
        Dim compressed() As Byte = ConvertToByteArray(source)
        System.IO.File.WriteAllBytes(destination & name & ".Bak" & ".zip", compressed)

    Or this code:

        Public Sub cmdCompressFile(ByVal FileName As String)
            ' Stream object that reads file contents
            Dim streamObj As Stream = New StreamReader(FileName).BaseStream
            ' Allocate space in buffer according to the length of the file read
            Dim buffer(CInt(streamObj.Length) - 1) As Byte
            ' Fill buffer
            streamObj.Read(buffer, 0, buffer.Length)
            streamObj.Close()
            ' File Stream object used to change the extension of a file
            Dim compFile As System.IO.FileStream = File.Create(Path.ChangeExtension(FileName, "zip"))
            ' GZip object that compresses the file
            Dim zipStreamObj As New GZipStream(compFile, CompressionMode.Compress)
            ' Write to the Stream object from the buffer
            zipStreamObj.Write(buffer, 0, buffer.Length)
            zipStreamObj.Close()
        End Sub

    Please, I need to compress the file without losing the file extension inside the compressed file. Thanks,
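
    A possible explanation for the missing extension (hedged, as it depends on the archiver used to open the file): GZipStream writes a raw gzip stream that stores no entry name, so a tool can only infer the inner name from the archive's own name. Appending the new extension rather than replacing it preserves it, as in this sketch:

    ```vb
    ' Sketch: "Sample.bak" -> "Sample.bak.gz", so the inner extension survives.
    Dim archiveName As String = FileName & ".gz"
    Dim compFile As System.IO.FileStream = File.Create(archiveName)
    ```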

    Read the article

  • How to convert from integer to unsigned char in C, given integers larger than 255?

    - by Alf_InPogform
    As part of my CS course I've been given some functions to use. One of these functions takes a pointer to unsigned chars to write some data to a file (I have to use this function, so I can't just make my own purpose-built function that works differently, BTW). I need to write an array of integers whose values can be up to 4095 using this function (which only takes unsigned chars). However, am I right in thinking that an unsigned char can only have a max value of 255 because it is 1 byte long? Do I therefore need to use 4 unsigned chars for every integer? But casting doesn't seem to work with larger values of the integer. Does anyone have any idea how best to convert an array of integers to unsigned chars?
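
    If splitting each integer into bytes is acceptable, shifting and masking does it; a sketch assuming the values fit in 16 bits, so two unsigned chars per integer cover 0-4095:

    ```c
    #include <stdio.h>

    /* Pack each int into two unsigned chars (low byte first), then unpack. */
    int main(void)
    {
        int values[] = { 4095, 300, 7 };
        unsigned char bytes[6];

        for (int i = 0; i < 3; i++) {
            bytes[2 * i]     = (unsigned char)(values[i] & 0xFF);        /* low 8 bits  */
            bytes[2 * i + 1] = (unsigned char)((values[i] >> 8) & 0xFF); /* high 8 bits */
        }

        for (int i = 0; i < 3; i++) {
            int restored = bytes[2 * i] | (bytes[2 * i + 1] << 8);
            printf("%d\n", restored); /* prints 4095, 300, 7 */
        }
        return 0;
    }
    ```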

    Read the article

  • Add extra data to response object to render in template

    - by mp0int
    I need to write a code snippet that disables access to some parts of a site. The admin and the main page will be displayable, but the user section (which uses Ajax) will be displayed yet unusable (with a transparent div set over the page). There are also a few pages which will be disabled entirely. My logic is that I write a middleware:

        def process_request(self, request):
            if ayar.tonline_kapali:
                url_parcalari = request.path.split('/')
                if url_parcalari[0] not in settings.BAGIMSIZ_URLLER:
                    if not request.is_ajax():
                        return render_to_response('bakim_modu.html')
            return None

    That code lets me display a "site closed" message for the URLs not in BAGIMSIZ_URLLER (which contains the URLs that will remain accessible). But I can't figure out how to solve the problem for the Ajax pages... I need to set a header or something on the response and check it in the template.
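
    For the Ajax branch, one sketch (the 503 status and the header name are arbitrary choices, not Django conventions) is to return a response the client-side script can recognize:

    ```python
    from django.http import HttpResponse

    def process_request(self, request):
        if ayar.tonline_kapali and request.is_ajax():
            response = HttpResponse('maintenance', status=503)
            response['X-Site-Closed'] = '1'  # hypothetical header for the client JS
            return response
        return None
    ```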

    Read the article

  • Modifiers in Makefile rule's dependency list

    - by gnu_maker
    The problem is fairly simple. I am trying to write a rule that, given the name of the required file, will be able to tailor its dependencies. Let's say I have two programs, calc_foo and calc_bar, and each generates a file whose contents depend on a parameter. My targets would be named 'target_*_*'; for example, 'target_foo_1' would be generated by running './calc_foo 1'. The question is: how do I write a makefile that generates the outputs of the two programs for a range of parameters?
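
    A sketch using GNU make pattern rules (assumes GNU make, and that each program writes its result to stdout; if it creates the file itself, drop the redirection; recipe lines must start with a tab):

    ```makefile
    # The stem $* captures the parameter: "make target_foo_1" runs "./calc_foo 1".
    target_foo_%: calc_foo
    	./calc_foo $* > $@

    target_bar_%: calc_bar
    	./calc_bar $* > $@

    # Build a whole range of parameters in one invocation.
    PARAMS := 1 2 3 4 5
    all: $(addprefix target_foo_,$(PARAMS)) $(addprefix target_bar_,$(PARAMS))
    ```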

    Read the article

  • Are C++ reads and writes of an int atomic?

    - by theschmitzer
    I have two threads, one updating an int and one reading it. The value is a statistic, so the order of the read and write is irrelevant. My question is: do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write complete and get interrupted, with the read happening in between? For example, think of:

        value = 0x0000FFFF
        increment value to 0x00010000

    Is there a time when the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more likely something like this is. I've always synchronized these kinds of accesses, but was curious what the community thinks.
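
    For what it's worth, a plain int gives no such guarantee in the C++ standard (pre-C++11 it says nothing about threads at all). A sketch of the portable C++11 answer, where torn reads and writes are impossible by definition:

    ```cpp
    #include <atomic>
    #include <iostream>
    #include <thread>

    std::atomic<int> stat_value{0};

    int main()
    {
        // Writer thread: each store is indivisible on std::atomic<int>.
        std::thread writer([] {
            for (int i = 0; i < 100000; ++i)
                stat_value.store(i, std::memory_order_relaxed);
        });

        // Reader thread: always observes some value that was actually written.
        std::thread reader([] {
            for (int i = 0; i < 100000; ++i)
                (void)stat_value.load(std::memory_order_relaxed);
        });

        writer.join();
        reader.join();
        std::cout << stat_value.load() << '\n'; // 99999
    }
    ```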

    Read the article

  • What is the best approach to iterate over and "store" files in a directory in C (Linux)?

    - by Andrei Ciobanu
    I have written a function that checks whether two files are duplicates or not. Its signature is:

        int check_dup_memmap(char *f1_name, char *f2_name)

    It returns:

        (-1) - if something went wrong;
        (0)  - if the two files are identical;
        (+1) - if the two files are different.

    The next step is to write a function that iterates through all the files in a certain directory, applies the previous function, and reports every existing duplicate. Initially I thought of writing a function that generates a file with all the filenames in a certain directory and then reads that file again and again and compares every two files. Here is that version of the function, which gets all the filenames in a certain directory:

        void *build_dir_tree(char *dirname, FILE *f)
        {
            DIR *cdir = NULL;
            struct dirent *ent = NULL;
            struct stat buf;

            if (f == NULL) {
                fprintf(stderr, "NULL file submitted. [build_dir_tree].\n");
                exit(-1);
            }
            if (dirname == NULL) {
                fprintf(stderr, "NULL dirname submitted. [build_dir_tree].\n");
                exit(-1);
            }
            if ((cdir = opendir(dirname)) == NULL) {
                char emsg[MFILE_LEN];
                sprintf(emsg, "Cannot open dir: %s [build_dir_tree]\t", dirname);
                perror(emsg);
            }
            chdir(dirname);
            while ((ent = readdir(cdir)) != NULL) {
                lstat(ent->d_name, &buf);
                if (S_ISDIR(buf.st_mode)) {
                    if (strcmp(".", ent->d_name) == 0 || strcmp("..", ent->d_name) == 0) {
                        continue;
                    }
                    build_dir_tree(ent->d_name, f);
                } else {
                    fprintf(f, "/%s/%s\n", util_get_cwd(), ent->d_name);
                }
            }
            chdir("..");
            closedir(cdir);
        }

    Still, I consider this approach a little inefficient, as I have to parse the file again and again. In your opinion, which of these other approaches should I follow?

    1. Write a data structure and hold the files in it instead of writing them to a file? I think that for a directory with a lot of files, the memory will become very fragmented.
    2. Hold all the filenames in an auto-expanding array, so that I can easily access every file by its index, because they will be in a contiguous memory location (see the sketch below).
    3. Map the file into memory using mmap()? But mmap may fail, as the file gets too big.

    Any opinions on this? I want to choose the most efficient path and use as few resources as possible. This is the requirement of the program... EDIT: Is there a way to get the number of files in a certain directory without iterating through it?
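
    A minimal sketch of option 2, an auto-expanding array of names grown with realloc (error handling trimmed for brevity):

    ```c
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char **names;
        size_t count;
        size_t capacity;
    } name_list;

    /* Append a copy of `name`, doubling capacity when full. */
    void name_list_push(name_list *list, const char *name)
    {
        if (list->count == list->capacity) {
            list->capacity = list->capacity ? list->capacity * 2 : 64;
            list->names = realloc(list->names, list->capacity * sizeof *list->names);
        }
        list->names[list->count++] = strdup(name);
    }
    ```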

    Read the article

  • Is the Ruby operator ||= intelligent?

    - by brad
    I have a question regarding the ||= statement in Ruby, and this is of particular interest to me as I'm using it to write to memcache. What I'm wondering is: does ||= check the receiver first to see if it's set before calling the setter, or is it literally an alias for

        x = x || y

    This wouldn't really matter in the case of a normal variable, but using something like

        CACHE[:some_key] ||= "Some String"

    could possibly do a memcache write, which is more expensive than a simple variable assignment. Oddly enough, I couldn't find anything about ||= in the Ruby API docs, so I haven't been able to answer this myself. Of course I know that

        CACHE[:some_key] = "Some String" if CACHE[:some_key].nil?

    would achieve this; I'm just looking for the most terse syntax.
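
    A quick experiment (a sketch using a Hash subclass that logs every []= call) suggests h[:k] ||= v behaves like h[:k] || (h[:k] = v): the setter runs only when the current value is nil or false:

    ```ruby
    class LoggingHash < Hash
      def []=(key, value)
        puts "write: #{key}"  # log every setter call
        super
      end
    end

    cache = LoggingHash.new
    cache[:some_key] ||= "Some String"  # prints "write: some_key" (was nil)
    cache[:some_key] ||= "Other String" # prints nothing: setter never called
    ```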

    Read the article

  • What is the best way to mock a 3rd-party object in Ruby?

    - by spinlock
    I'm writing a test app using the twitter gem, and I'd like to write an integration test, but I can't figure out how to mock the objects in the Twitter namespace. Here's the function that I want to test:

        def build_twitter(omniauth)
          Twitter.configure do |config|
            config.consumer_key = TWITTER_KEY
            config.consumer_secret = TWITTER_SECRET
            config.oauth_token = omniauth['credentials']['token']
            config.oauth_token_secret = omniauth['credentials']['secret']
          end
          client = Twitter::Client.new
          user = client.current_user
          self.name = user.name
        end

    and here's the rspec test that I'm trying to write:

        feature 'testing oauth' do
          before(:each) do
            @twitter = double("Twitter")
            @twitter.stub!(:configure).and_return true
            @user = double("Twitter::User")
            @user.stub!(:name).and_return("Tester")
            @client = double("Twitter::Client")
            @client.stub!(:current_user).and_return(@user)
          end

          scenario 'twitter' do
            visit root_path
            login_with_oauth
            page.should have_content("Pages#home")
          end
        end

    But I'm getting this error:

        1) testing oauth twitter
           Failure/Error: login_with_oauth
           Twitter::Error::Unauthorized:
             GET https://api.twitter.com/1/account/verify_credentials.json: 401: Invalid / expired Token
           # ./app/models/user.rb:40:in `build_twitter'
           # ./app/models/user.rb:16:in `build_authentication'
           # ./app/controllers/authentications_controller.rb:47:in `create'
           # ./spec/support/integration_spec_helper.rb:3:in `login_with_oauth'
           # ./spec/integration/twit_test.rb:16:in `block (2 levels) in <top (required)>'

    The mocks above use rspec, but I'm open to trying mocha too. Any help would be greatly appreciated.
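
    One thing the spec above never does is hand the doubles to build_twitter, which constructs its own Twitter::Client. A sketch (same old-style stub! syntax) that intercepts the constructor so the real API is never hit:

    ```ruby
    before(:each) do
      @user = double("Twitter::User", :name => "Tester")
      @client = double("Twitter::Client", :current_user => @user)
      Twitter.stub!(:configure)                        # swallow the real configuration
      Twitter::Client.stub!(:new).and_return(@client)  # hand back our double instead
    end
    ```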

    Read the article

  • How to compare floats correctly and in a standard way?

    - by DIMEDROLL
    Every time I start a new project, when I need to compare float or double variables I write code like this:

        if (fabs(prev.min[i] - cur->min[i]) < 0.000001 &&
            fabs(prev.max[i] - cur->max[i]) < 0.000001) {
            continue;
        }

    Then I want to get rid of these magic numbers 0.000001 (and 0.00000000001 for double) and fabs, so I write an inline function and some defines:

        #define FLOAT_TOL 0.000001

    So I wonder: is there any standard way of doing this? Maybe a standard header file? It would also be nice to have float and double limits (min and max values).
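
    There is no single standard comparison function, but the standard headers do supply the ingredients; a sketch of a relative-tolerance comparison built on float.h and math.h (whether a relative epsilon fits depends on the magnitudes involved):

    ```c
    #include <float.h>   /* FLT_EPSILON, DBL_EPSILON, FLT_MAX, DBL_MIN, ... */
    #include <math.h>    /* fabs, fmax */

    /* Relative comparison: scale the machine epsilon by the operands' magnitude. */
    static int nearly_equal(double a, double b)
    {
        double scale = fmax(1.0, fmax(fabs(a), fabs(b)));
        return fabs(a - b) <= DBL_EPSILON * scale;
    }
    ```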

    Read the article

  • Why is there a difference between assembly languages on Windows and Linux?

    - by mcaaltuntas
    I am relatively new to all this low-level stuff and assembly language, and I want to learn it in more detail. Why is there a difference between Linux and Windows assembly language? As I understand it, when I compile C code the toolchain does not really produce pure machine or assembly code; it produces OS-dependent binary code. But why? For example, when I use an x86 system, the CPU only understands x86 assembly, am I right? So why don't we write pure x86 assembly code, and why are there different assembly variations depending on the operating system? If we wrote pure assembly, or the OS produced pure assembly, wouldn't there be no binary-compatibility issues between operating systems? I am really wondering about all the reasons behind this. Any detailed answer, article, or book would be great. Thanks.

    Read the article

  • String encryption only with numbers?

    - by HH
    Suppose your bank clerk gives you an arbitrary password such as hel34/hjal0@# and you cannot remember it without writing it down. Dilemma: you never write passwords on paper. So you try to invent an encryption, a one-to-one map, where you write down only a key consisting of numbers, and leave the remaining junk to your server. Of course, the password can consist of arbitrary characters. The implementation should work like:

        hel34/hjal0@#  ---- magic ---->  3442

    and the other way:

        3442  ---- server magic ---->  hel34/hjal0@#

    [Update] mvds has the correct idea, to change the base; how would you implement it?
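
    A minimal sketch of the base-change idea (not encryption in any cryptographic sense; anyone who knows the alphabet can invert it): treat each character as a digit in base 95 over printable ASCII and write the resulting integer in decimal.

    ```python
    # Printable ASCII, space (32) through tilde (126): 95 symbols.
    BASE = 95
    OFFSET = 32

    def to_number(password: str) -> int:
        """Interpret the password as the digits of a base-95 number."""
        n = 0
        for ch in password:
            n = n * BASE + (ord(ch) - OFFSET)
        return n

    def from_number(n: int) -> str:
        """Invert to_number (leading spaces, the 'zero digit', are lost)."""
        chars = []
        while n:
            n, digit = divmod(n, BASE)
            chars.append(chr(digit + OFFSET))
        return ''.join(reversed(chars))

    secret = to_number("hel34/hjal0@#")
    assert from_number(secret) == "hel34/hjal0@#"
    ```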

    Read the article

  • How to find the formula for the best case and worst case of my algorithm?

    - by rachel7660
    I was given a task: write an algorithm that, given two lists of data as input, determines whether they have at least one element in common. So, this is my algorithm (I write the code in PHP):

        $arrayA = array('5', '6', '1', '2', '7');
        $arrayB = array('9', '2', '1', '8', '3');
        $arrayC = array();

        foreach ($arrayA as $val) {
            if (in_array($val, $arrayB)) {
                array_push($arrayC, $val);
            }
        }

    That's my own algorithm; I'm not sure if it's a good one. So, based on my algorithm, how do I find the formula for the best case and worst case (big O)? Note: please do let me know if my algorithm is wrong. My goal is "given two lists of data as input, determine whether they have at least one element in common."
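
    For reference, the loop above does a linear in_array scan of $arrayB for every element of $arrayA, so the worst case is O(n*m). A hash-lookup variant (a sketch) brings that down to O(n+m) on average:

    ```php
    <?php
    $arrayA = array('5', '6', '1', '2', '7');
    $arrayB = array('9', '2', '1', '8', '3');

    // Build a hash set from $arrayB: isset() is O(1) per lookup on average.
    $setB = array_flip($arrayB);

    $common = array();
    foreach ($arrayA as $val) {
        if (isset($setB[$val])) {
            $common[] = $val;
        }
    }
    print_r($common); // Array ( [0] => 1 [1] => 2 )
    ```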

    Read the article

  • Help with IE bugs when writing JSON via an ASPX response

    - by Jereme
    I have an ASPX page that I am using to write JSON. It works great in Firefox and Chrome, but when I try to use it in IE 8 it gives me a "The XML page cannot be displayed" error instead of allowing jQuery to load the JSON written by the response. Any ideas? Here is what my code looks like:

        protected override void OnLoad(EventArgs e)
        {
            Response.Clear();
            Response.ClearHeaders();
            Response.ContentType = "application/json";
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Write(string.Format(
                "[ {{ \"Foo\": \"{0}\", \"bar\": \"{1}\" }} ]",
                "Foo Content", "Bar Content"));
            Response.End();
        }

    Read the article

  • Rack URL Mapping

    - by Puru puru rin..
    Hi, I am trying to write two kinds of Rack routes. Rack lets us write such routes like so:

        app = Rack::URLMap.new('/test'  => SimpleAdapter.new,
                               '/files' => Rack::File.new('.'))

    In my case, I would like to handle these routes:

    - "/" or "/index"
    - "/*", to match any other route

    So I tried this:

        app = Rack::URLMap.new('/index' => SimpleAdapter.new,
                               '/'      => Rack::File.new('./public'))

    This works well, but... I don't know how to add the '/' path as an alternative to the '/index' path. The path '/*' is not interpreted as a wildcard, according to my tests. Do you know how I could do this? Thanks
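
    Since Rack::URLMap dispatches on the longest matching prefix, '/' already behaves as the catch-all; for sharing one handler between '/' and '/index', a tiny lambda dispatcher is one option (a sketch reusing SimpleAdapter from above):

    ```ruby
    require 'rack'

    main  = SimpleAdapter.new
    files = Rack::File.new('./public')

    # '/' and '/index' share one handler; every other path falls through to files.
    app = lambda do |env|
      case env['PATH_INFO']
      when '/', '/index' then main.call(env)
      else                    files.call(env)
      end
    end
    ```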

    Read the article

  • Upload/Download files with ASP.NET using the Visual Studio web server

    - by jlnorsworthy
    I need to upload image files, store them on the web server, and then display them in HTML. I'm currently using AppDomain.CurrentDomain.BaseDirectory to get a path to read and write files. Firefox won't display the images, as I just get an img tag with src="C:\...". Is it possible to read/write files like this with the Visual Studio web server? I know it's possible with IIS, but I don't want to do that if I don't have to (I'm pretty new to web development). I'm implementing this with ASP.NET MVC 2, but I don't think that's relevant.
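
    One pattern worth trying (a sketch; the action name, the ~/uploads folder, and the view wiring are assumptions): save under the web root via Server.MapPath and hand the browser a URL, which both the Visual Studio web server and IIS can serve:

    ```csharp
    // Sketch of an MVC 2 controller action.
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase postedFile)
    {
        // Physical path under the site root, e.g. C:\...\MySite\uploads
        string folder = Server.MapPath("~/uploads");
        string physicalPath = System.IO.Path.Combine(folder, postedFile.FileName);
        postedFile.SaveAs(physicalPath);

        // The browser needs a URL, not a filesystem path:
        ViewData["imageUrl"] = Url.Content("~/uploads/" + postedFile.FileName);
        return View(); // view renders <img src="/uploads/foo.png" />
    }
    ```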

    Read the article
