Search Results

Search found 3973 results on 159 pages for 'boost filesystem'.

Page 126/159

  • Is a rubber keyboard suitable for heavy use?

    - by Vilx-
    Every keyboard wears out with time, and mine has some age already. The day it fails is coming closer and closer, so I'm slowly starting to look around for a new one. I use the keyboard for gaming and programming, so it gets some pretty solid use. I also tend to eat by the computer, so there's plenty of... uhh... lifeforms down there. Anyway, I was looking at these rubber keyboards. They come pretty cheap (my local computer shop has one for less than $20) and they seem to have some nice properties: they can be easily cleaned, they're quiet, and they can be rolled up when needed (plus no worries about spilled drinks). However, I'm wondering how well they type. If I can't write on one at a decent speed, the rest of the features don't matter. Not that I'm a fast typist, but being a professional programmer does give a boost to the skill. I couldn't find any reviews on the net, so I'm turning to you. Who has used these keyboards, and what was your experience? Perhaps there is something else I haven't thought of that would make such a keyboard a bad idea?

    Read the article

  • Avoiding thumbnail name collisions with sorl-thumbnail

    - by Owen Nelson
    Understanding that I should probably just dig into the source to come up with a solution, I'm wondering if anyone has come up with a tactic for dealing with this. In my project, I have a lot of images being generated outside of the application. I'm isolating them on the filesystem based on a model's pk. For example, a model instance with a pk of 121 might have the following images:
        .../thumbs/1/2/1/img.1.jpg
        .../thumbs/1/2/1/img.2.jpg
        ...
        .../thumbs/1/2/1/img.27.jpg
    Since the image filenames themselves are not guaranteed to be unique, I'm looking for a way to inform sorl (at runtime) that I'd like to prefix thumbs for this model with the instance pk value. Is this even possible without patching sorl?
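    A minimal sketch of one way this might be approached, assuming a sorl-thumbnail release that supports swapping in a custom backend via a THUMBNAIL_BACKEND setting and overriding its filename hook; the setting name, the import path, the _get_thumbnail_filename signature, and the source.name attribute below are all assumptions to verify against the version actually installed:

        # settings.py -- assumes the installed sorl-thumbnail honours THUMBNAIL_BACKEND
        THUMBNAIL_BACKEND = 'myapp.thumbnail_backend.PkPrefixBackend'

        # myapp/thumbnail_backend.py
        from sorl.thumbnail.base import ThumbnailBackend   # import path is an assumption

        class PkPrefixBackend(ThumbnailBackend):
            """Prefix generated thumbnail names with the pk encoded in the source path."""

            def _get_thumbnail_filename(self, source, geometry_string, options):
                name = super(PkPrefixBackend, self)._get_thumbnail_filename(
                    source, geometry_string, options)
                # source.name is assumed to expose the storage path, laid out as in the
                # question: .../thumbs/1/2/1/img.27.jpg -> pk '121' (layout assumption)
                pk = ''.join(source.name.split('/')[-4:-1])
                # Isolate thumbnails per instance to avoid name collisions.
                return '%s/%s' % (pk, name)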

    Read the article

  • Flash ActionScript 3 runtime SecurityError

    - by dd
    I have a SWF that loads a SWF, which loads another SWF (the video player). Is there a trick in the publish settings? Everything works fine on my local machine; when I upload it to the server the error below happens and the video doesn't load:
        SecurityError: Error #2148: SWF file http:// (URL where Site is hosted)/video.swf cannot access local resource file:///Macintosh%20HD/Users/..flash.flv. Only local-with-filesystem and trusted local SWF files may access local resources.
        at flash.net::NetStream/play()
        at fl.video::VideoPlayer/http://www.adobe.com/2007/flash/flvplayback/internal::_play()
        at fl.video::VideoPlayer/http://www.adobe.com/2007/flash/flvplayback/internal::_setUpStream()
        at fl.video::VideoPlayer/http://www.adobe.com/2007/flash/flvplayback/internal::_load()
        at fl.video::VideoPlayer/load()
        at fl.video::FLVPlayback/doContentPathConnect()

    Read the article

  • Core Data store corruption

    - by sehugg
    A handful of customers for my iPhone app are experiencing Core Data store corruption (I assume so, since the error is "Failed to save to data store: Operation could not be completed. (Cocoa error 259.)"). Has anyone else experienced this kind of store corruption? I am worried since I aim to push an update soon that performs a schema migration, and I fear that this will expose even more problems. I had assumed that the Core Data/SQLite APIs use atomic operations and are immune to corruption except when the underlying filesystem experiences corruption. Is there a way to reduce/prevent corruption, or at least a good way to reproduce it (I have been unsuccessful thus far)?

    Read the article

  • global scope of variable

    - by shantanuo
    The following shell script will check the disk space and change the variable "diskfull" to 1 if the usage is more than 10%. The last echo always shows 0. I tried a global diskfull=1 in the if clause but it did not work. How do I change the variable to 1 if the disk space consumed is more than 10%?
        #!/bin/sh
        diskfull=0
        ALERT=10
        df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
        do
          #echo $output
          usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
          partition=$(echo $output | awk '{ print $2 }' )
          if [ $usep -ge $ALERT ]; then
            diskfull=1
            exit
          fi
        done
        echo $diskfull

    Read the article

  • Language in a Sandbox in Rails

    - by Jon Romero
    I've found that there WAS a sandbox gem (created by the guys that made Try Ruby in your browser), but it was compatible only with Ruby 1.8. Another problem is that I cannot find it anymore (it seems they stopped serving the gem from their servers...). So, is there any secure way of running Ruby in a sandbox (so you can run it from your browser)? Or an easy way to run another language (for example Lua or Python) in a sandbox (no filesystem access, no creation of objects, etc.) and call it from Ruby (Rails 2.2)? I want to make an application like try_ruby, even without having Ruby underneath. But it has to be an easy language (I saw there was a Prolog in Ruby, even a Lisp, but I don't think they are easy-to-learn languages...). So, do you have any suggestions or tips? Or should I just start creating my own DSL in Ruby (if there is a way of building a somewhat safe system)? Thx

    Read the article

  • mod_cache not working

    - by Pistos
    I have a PHP site that has many dynamically generated pages. I'm trying to turn to mod_cache to help boost performance, because in most cases, content does not change in a given day. I have configured mod_cache as best I could, following examples around the web, including the mod_cache page on apache.org. When I set LogLevel debug, I see a bit of information about the caching that is [not] happening. There are plenty of pairs of lines like this:
        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(141): Adding CACHE_SAVE filter for /foo/bar
        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(148): Adding CACHE_REMOVE_URL filter for /foo/bar
    Which is fine, because I've set CacheEnable disk /foo, to indicate that I want everything under /foo cached. I'm new to mod_cache, but my understanding of these lines is that it just means that mod_cache has acknowledged that the URL is supposed to be cached, but there are supposed to be more lines indicating that it is saving the data to cache, and then later retrieving them on subsequent hits to the same URL. I can hit the same URL till I'm blue in the face, whether with F5 refreshing, or not, or with different browsers, or different computers. It's always that pair of lines that shows in the logs, and nothing else. When I set CacheEnable disk /, then I see more activity. But I don't want to cache the entire site, and there are many, many different subpaths to the site, so I don't want to have to modify code to set no-cache headers in all the necessary places. I'll mention that mod_rewrite is in use here, rewriting /foo/bar to something like index.php?baz=/foo/bar, but my understanding is that mod_cache uses the pre-rewrite URL, not the post-rewrite URL. As far as I can tell, the response headers are not getting in the way of caching. Here's an example from one hit:
        Cache-Control: must-revalidate, max-age=3600
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 16790
        Content-Type: text/html
        Date: Fri, 01 Jun 2012 21:43:09 GMT
        Expires: Fri, 1 Jun 2012 18:43:09 -0400
        Keep-Alive: timeout=15, max=100
        Pragma:
        Server: Apache
        Vary: Accept-Encoding
    The mod_cache config is as follows:
        CacheRoot /var/cache/apache2/
        CacheDirLevels 3
        CacheDirLength 2
        CacheEnable disk /foo
    What is getting in the way of mod_cache doing its job of caching?

    Read the article

  • SHAddToRecentDocs without a file?

    - by Chris Becke
    I was toying with an IRC client, integrating it with the Windows 7 app bar. To get a "Frequent" or "Recent" items list, one has to call the SHAddToRecentDocs API. I want to add recently visited IRC channels to the Windows 7 Jumplist for the IRC application. Now, my problem is, IRC channels don't exist in the file system, and SHAddToRecentDocs seems to insist on getting some sort of file system object. I've tried to work around it by creating an IShellItem pointing to my application and giving it a command line to launch the channel. The shell is rebelling, however, and thus far has not visibly added any of my "recent document" attempts to the Jumplist. Is there no way to do this without creating some kind of entirely unwanted filesystem object?

    Read the article

  • Linux ext3 readdir and concurrent updates

    - by Wangnick
    Dear all, we are receiving about 10000 messages per hour. We store them as individual files in hourly directories on an ext3 filesystem. The file name includes a sequence number. We use rsync to mirror these files every 20 seconds to another location (via a SAN, but that doesn't matter). Sometimes an rsync run picks up files n-3, n-2, n-1, n+1, and then the next rsync run continues with n, n+2, n+3, n+4 and so on. Is it possible that when one process creates files in a certain sequence within a directory, another process using readdir() sees the files appearing in a different sequence? Kind regards, Sebastian

    Read the article

  • Error while installing boost_1_54

    - by Farhat
    On trying to install Boost I get this error during the configuration checks. Googling did not give any pointers.
        [root@heracles boost_1_54_0]# ./b2 install
        Performing configuration checks
            - 32-bit                   : no  (cached)
            - 64-bit                   : yes (cached)
            - arm                      : no  (cached)
            - mips1                    : no  (cached)
            - power                    : no  (cached)
            - sparc                    : no  (cached)
            - x86                      : yes (cached)
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - has_icu builds           : no  (cached)
        warning: Graph library does not contain MPI-based parallel components.
        note: to enable them, add "using mpi ;" to your user-config.jam
            - zlib                     : yes (cached)
            - iconv (libc)             : yes (cached)
            - icu                      : no  (cached)
            - icu (lib64)              : no  (cached)
            - compiler-supports-ssse3  : yes (cached)
            - compiler-supports-avx2   : no  (cached)
            - gcc visibility           : yes (cached)
            - long double support      : yes (cached)
        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - zlib                     : yes (cached)
    How can the alternative for allocator_sources be located? Thanks.

    Read the article

  • Java resource as file

    - by Martin Riedel
    Is there a way in Java to construct a File instance on a resource retrieved from a jar through the classloader? My application uses some files from the jar (default) or from a filesystem directory specified at runtime (user input). I'm looking for a consistent way of a) loading these files as a stream b) listing the files in the user-defined directory or the directory in the jar respectively Edit: Apparently, the ideal approach would be to stay away from java.io.File altogether. Is there a way to load a directory from the classpath and list its contents (files/entities contained in it)?

    Read the article

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we run the following pseudo-code SQL batch query every hour:
        INSERT INTO TemporaryTable
            (SELECT FROM HighlyContentiousTableInInnoDb
             WHERE allKindsOfComplexConditions are true)
    Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb, even though it was just reading from it. That was making some other very simple queries take ~25 seconds (that's how long that other query takes). Then I discovered that InnoDB tables in such a case are actually locked by a SELECT! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in the article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy? Then I could just copy the HighlyContentiousTable to another table and do the query there.
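    One commonly suggested alternative to the OUTFILE workaround is to run the statement under READ COMMITTED isolation, which lets InnoDB do a consistent, non-locking read of the source rows for an INSERT ... SELECT (on MySQL 5.1+ this generally requires row-based binary logging, or the binary log disabled). A rough sketch of that idea using MySQLdb; the connection parameters are placeholders and the table names are taken from the question's pseudo-code, with a made-up WHERE condition:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="production")
        cur = conn.cursor()

        # Relax isolation for this session only, so the SELECT side of the
        # INSERT ... SELECT does not take shared locks on the source rows.
        # Run this before the transaction that does the copy.
        cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")

        cur.execute("""
            INSERT INTO TemporaryTable
            SELECT * FROM HighlyContentiousTableInInnoDb
            WHERE some_complex_condition = 1
        """)
        conn.commit()
        cur.close()
        conn.close()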

    Read the article

  • Using Spotlight as the "database" of an application

    - by vicvicvic
    I'm developing an OS X application to organize "things" (as iTunes is to music and iPhoto is to photos). Instead of having my own database and index, I'm considering using Spotlight to essentially serve this purpose. Has anyone tried this? Is it wise? The main benefit, as I see it, would be simplicity and avoiding redundancy. It seems a bit wasteful to implement my own index machinery when OS X comes with one built in. I have little experience working with Spotlight, however. From a user's perspective, I do know that it has been slow and imprecise in older versions of OS X. I also have a gut feeling that since it's aimed at searching the whole filesystem, using it for "local" purposes becomes hackish. Obviously, my application's index needs to be constantly up to date. Can mdimport be used for this?

    Read the article

  • Launching Vim via Lua

    - by Keith Pimmel
    I'm writing a simple little Lua command-line app that will build a static website. I'm storing my fragments in a SQLite database. Retrieving the data from the db is straightforward, as is saving it; my question comes from editing the data. Is there an elegant way to pipe the data from Lua to Vim? Can Vim edit a memory buffer and return it? I was planning on launching the editor via os.execute('vim'), but only after grabbing a temporary file handle and dumping the database output into that. I would like to avoid touching the filesystem that way, but that is my contingency plan.
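    For reference, the temp-file contingency the asker describes is a fairly standard pattern; a rough sketch of the round-trip (written in Python here purely for illustration, since the asker's tool is Lua, and with edit_in_vim as a hypothetical helper name) might look like this:

        import os
        import subprocess
        import tempfile

        def edit_in_vim(initial_text):
            """Dump text to a temp file, open it in Vim, and read back the result."""
            fd, path = tempfile.mkstemp(suffix=".txt")
            try:
                with os.fdopen(fd, "w") as f:
                    f.write(initial_text)
                subprocess.call(["vim", path])   # blocks until the user quits Vim
                with open(path) as f:
                    return f.read()
            finally:
                os.remove(path)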

    Read the article

  • Create an assembly in memory

    - by Jared I
    I'd like to create an assembly in memory, using the classes in Reflection.Emit. Currently, I can create the assembly and get its bytes using something like:
        AssemblyBuilder builder = AppDomain.CurrentDomain.DefineDynamicAssembly(..., AssemblyBuilderAccess.Save);
        // ... create the assembly ...
        builder.Save(targetFileName);
        using(FileStream fs = File.Open(targetFileName, FileMode.Open))
        {
            // ... read the bytes from the file stream ...
        }
    However, it does so by creating a file on the local filesystem. I don't actually need the file, just the bytes that would be in the file. Is it possible to create the assembly without writing any files?

    Read the article

  • Windows I/O manager - classification of IRPs into read-like and write-like

    - by clyfe
    I am writing a Windows filesystem minifilter driver that must fail IRPs in a preoperation callback. How can I find out from the callback parameters whether the operation is read-like (only reads data) or write-like (modifies data on the disk - write, delete etc.)? I'm thinking of:
        Data->Iopb->TargetFileObject->ReadAccess
        Data->Iopb->TargetFileObject->WriteAccess
    But I'm not sure; I think these are available only in the postoperation callback. The documentation is really cumbersome. Code sample:
        FLT_PREOP_CALLBACK_STATUS Fail (
            __inout PFLT_CALLBACK_DATA Data,
            __in PCFLT_RELATED_OBJECTS FltObjects,
            __deref_out_opt PVOID *CompletionContext )
        {
            FLT_PREOP_CALLBACK_STATUS status = FLT_PREOP_SUCCESS_NO_CALLBACK;
            if ( IS WRITE_LIKE(Data, FltObjects) ) { // ??? HOW DO I FIND OUT????
                if( FLT_IS_FASTIO_OPERATION(Data) ){
                    status = FLT_PREOP_DISALLOW_FASTIO;
                } else {
                    status = FLT_PREOP_COMPLETE;
                }
                Data->IoStatus.Status = STATUS_ACCESS_DENIED;
                Data->IoStatus.Information = 0;
                return status;
            }
            return status;
        }

    Read the article

  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as an SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as a server, and OpenVPN2 at the other location acting as a client, but also acting as a DHCP server to machines behind it, so they are basically connected to the local LAN. Everything is set up fine and I can talk from location A to location B with no problems, as if everyone were local. I am, however, having some performance issues. OpenVPN1's CPU is pegged at 100% the entire time I am copying or doing any type of activity through the tunnel. I expected some increase in CPU usage, but nothing like this. It's really killing my performance. OpenVPN1 is running in ESX right now with 2 GB of RAM and 4 procs with unlimited bursting capacity. I am using AES-192 encryption with a 1024-bit key. Any idea how I can get the CPU usage down on OpenVPN1 and my download/upload speeds higher through the tunnel? Thanks.
    Edit: Turning down the logging helped boost the throughput a little bit, but I am still fairly shy of where I believe I should be, and I am still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.

    Read the article

  • synchronized block in JSP tag class

    - by Sudhakar
    Hi, I have been trying to find an answer to the following for the past couple of days, but couldn't find a comprehensive one.
    Problem statement: I have a custom JSP tag class which handles a web form submission, captures data, and writes it to the same file in the filesystem. As with all web applications, this can be triggered simultaneously, and I fear that multiple threads would be handling each of the submissions (we all know that's how servlets work).
    Code:
        synchronized (this){
            final String reportFileName = "testReport.csv";
            File reportDir = new File( rootCsDirectory, "reports" );
            if(!reportDir.isDirectory())reportDir.mkdir();
            File reportFile = new File (reportDir, reportFileName);
            logReport(reportFile,reportContent.toString());
        }
    Issue: a File object can be opened by one thread for writing, and at the same time another thread might try to access it, fail, and throw an exception. So I thought synchronizing (on the object) should solve the issue, but I read somewhere that the JSP engine keeps a pool of JSP tag objects, so I am afraid that synchronized (this) won't work and it should be changed to synchronized (this.getClass()).

    Read the article

  • ASP.NET HTTPHandler not throwing exception when one is expected

    - by josephj1989
    I have an HttpHandler class (implementing IHttpHandler) where the path defined for the handler in web.config is *.jpg. I am requesting a JPG image in my page. Within the HTTP handler I am writing to a file in the filesystem. By mistake I was trying to write to a non-existent directory. This should have thrown an exception, but execution simply proceeds. Of course, no file is written. But if I give a proper directory, the file is written correctly. Is there anything special about HttpHandler exceptions? See part of the code:
        public void ProcessRequest(HttpContext context){
            File.WriteAllLines(context.Request.ApplicationPath+@"\"+"resul.log",new string[]{"Entered JPG Handler"});
    If I put a breakpoint on the File.WriteAllLines statement and then step over it, I can see an exception occurring.

    Read the article

  • increasing amazon root volume size

    - by OCD
    I have a default Amazon EC2 instance with an 8GB root volume. I am running out of space. I have:
        1. Detached the current EBS volume in the AWS Management Console (web).
        2. Created a snapshot of this volume.
        3. Created a new volume with 50GB of space from my snapshot.
        4. Attached the new volume back to the instance at /dev/sda1.
    However, when I reconnect to the instance and run df -h, I get:
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/xvda1             8256952   8173624         0 100% /
        tmpfs                   308508        40    308468   1% /dev/shm
    I can see from the management console that my new volume is attached, but the filesystem is still not using my new volume's size. How do I make this work?

    Read the article

  • Query size of block device file in Python

    - by ??O?????
    Hello. I have a Python script that reads a file (typically from optical media), marking the unreadable sectors, to allow a re-attempt to read said unreadable sectors on a different optical reader. I discovered that my script does not work with block devices (e.g. /dev/sr0), in order to create a copy of the contained ISO9660/UDF filesystem, because os.stat().st_size is zero. The algorithm currently needs to know the filesize in advance; I can change that, but the issue (of knowing the block device size) remains, and it's not answered here, so I open this question. I am aware of the following two related SO questions:
        Determine the size of a block device (/proc/partitions, ioctl through ctypes)
        how to check file size in python? (about non-special files)
    Therefore, I'm asking: in Python, how can I get the file size of a block device file?
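    A minimal sketch of one approach: on Linux, opening the device and seeking to its end reports the size in bytes (the /dev/sr0 default and the device_size helper name are just examples; an ioctl such as BLKGETSIZE64 is another route, not shown here):

        import os

        def device_size(path="/dev/sr0"):
            """Return the size in bytes of a block device by seeking to its end."""
            fd = os.open(path, os.O_RDONLY)
            try:
                return os.lseek(fd, 0, os.SEEK_END)
            finally:
                os.close(fd)

        if __name__ == "__main__":
            print(device_size())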

    Read the article

  • How can I display locally stored images on an internet website?

    - by ropstah
    Hi, I would like to display images on my website that are stored on a visitor's local filesystem. Assuming I have the location of the image on the visitor's drive (e.g. c:\Documents And Settings\Ropstah\image.png), is it then possible for me to display this image on my internet website (e.g. www.website.com)? The images won't load when I use the following syntax (Internet Explorer 7, Firefox 3, etc.):
        <img src="file://c:\Documents and Settings\Ropstah\image.png" />
    The images DO display if the .html file (which I use on website.com/index.html) is located on my local PC...

    Read the article

  • Unable to load the Starteam Dump into SVN

    - by ssarivis
    Hi, I have a dump created from StarTeam 2008 R2 (10.4.7.-64) using SVN Importer 1.1-M8. But when I try to import the dump, it gives me this error:
        * adding path : tags/Test/GH/13_Environment/Process/Capgemini EN Template - Business Case.doc ...
        svnadmin: File already exists: filesystem 'help\db', transaction '2-2', path 'tags/Test/GH/13_Environment/Process/Capgemini EN Template - Business Case.doc'
    I can see from the svnadmin load output that the file has already been added. Maybe the dump created by SVN Importer is not correct. Can anyone guide me on how to solve this?

    Read the article

  • Server freezes while installing Redhat Enterprise Linux Server 6

    - by eisaacson
    We've tried both of the first two options:
        1. Install or upgrade an existing system
        2. Install system with basic video driver
    When trying option #1, it gets to a screen that has a solid cursor about halfway down, then freezes. When trying option #2, it freezes at the point where it says "Waiting for hardware to initialize...". Of course, we bought the unsupported version and haven't found anything to help us so far. Here are the specs of the server:
        ASUS P8Z68-M Pro LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard with UEFI BIOS
        RAIDMAX Reiter ATX-305WBP Black Steel / Plastic ATX Mid Tower Computer Case 450W Power Supply
        Intel Core i7-2600 Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor Intel HD Graphics 2000 BX80623I72600
        16GB RAM
        OCZ Agility 3 SSD 120GB
    From some of the posts out there, could the UEFI BIOS or the Sandy Bridge processor be a culprit here? We just tried the DVD on a different computer and it got past that point with ease (it's a standard Dell build compared to our custom machine). Could it be having difficulty recognizing drivers? How do we get past that?

    Read the article

  • Run shell script using fabric and piping script text to shell's stdin

    - by Peter Lyons
    Is there a way to execute a multi-line shell script by piping it to the remote shell's standard input in fabric? Or must I always write it to the remote filesystem, then run it, then delete it? I like sending to stdin as it avoids the temporary file. If there's no fabric API (and it seems like there is not based on my research), presumably I can just use the ssh module directly. Basically I wish fabric.api.run was not limited to a 1-line command that gets passed to the shell as a command line argument, but instead would take a full multi-line script and write it to the remote shell's standard input.
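    For what it's worth, one way to approximate this without a remote temp file is to wrap the script in a quoted heredoc inside a single run() call. A rough sketch against Fabric 1.x follows; the task name, script contents, path, and heredoc delimiter are placeholders, and whether Fabric's shell wrapping preserves the embedded newlines cleanly can vary by version, so treat this as a starting point rather than a guaranteed recipe:

        from fabric.api import run, task

        SCRIPT = """\
        set -e
        cd /var/www/myapp        # hypothetical path
        echo "step one"
        echo "step two"
        """

        @task
        def run_script():
            # The whole script is fed to a remote shell's stdin via a quoted
            # heredoc, so nothing is written to the remote filesystem.
            run("bash -s <<'REMOTE_SCRIPT'\n%s\nREMOTE_SCRIPT" % SCRIPT)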

    Read the article
