Search Results

Search found 11553 results on 463 pages for 'disk utility'.

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies in the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors. I use Eclipse for code development, but I usually build the projects from the command line, e.g. "mvn clean install".

    Unfortunately, when I do this, m2eclipse detects the disk activity and attempts to rebuild the workspace. This interferes with the command-line build and sometimes results in a race condition. For example, the command-line build might be in its clean phase but fail because it tries to delete a file or directory that is locked during the workspace rebuild. Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process takes 2-3x longer than it should.

    It isn't an option to not use Eclipse (e.g. to switch to NetBeans), or to disable m2eclipse; it is a useful plugin apart from this behaviour. So my question is: how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?
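
    One workaround (a hedged sketch, not an m2eclipse feature: whether it removes the interference depends on m2eclipse only watching target/) is to give command-line builds their own output directory, so they never touch the tree the workspace build writes to. The buildDirectory property name is invented:

        <!-- pom.xml: let -DbuildDirectory redirect where a build writes -->
        <properties>
            <buildDirectory>target</buildDirectory>
        </properties>
        <build>
            <directory>${buildDirectory}</directory>
        </build>

    Command-line builds then run as "mvn clean install -DbuildDirectory=target-cli", leaving the directory Eclipse rebuilds untouched, which at least removes the race on the clean phase.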

  • VS2008 - Find and Replace - Searches too many files.

    - by Pam Bullock
    I've used VS2008 a lot and have never had this problem. However, I started a new job and am using a new machine, and ever since I got here the VS Find feature has been acting strangely.

    I first noticed it when I did a Replace All for "All Open Files": the project wouldn't build, because values had actually been replaced in other files within the solution that were not open and didn't even open after I pressed Replace All. I have found that I can never use Replace All on this machine, because I never know what it is going to do. Even if I just do a Find on "Current Document", once it's done with the document, instead of getting the "No more matches found" message, it actually OPENS another random file from my solution where there is a match and keeps on going. It never seems to make any difference which "Look in" option I've chosen.

    My coworker has an install off the same disk and claims not to be experiencing this. We're in the middle of a stressful, huge project with a close deadline, so I know my boss won't let me do a reinstall. Has anyone else ever had this happen? Anyone know a fix?

    Thanks, Pam

  • Query Access and VB.NET

    - by yae
    Hi all: I have two tables, "products" and "pieces":

        PRODUCTS: idProd, product, price
        PIECES:   id, idProdMain, idProdChild, quant

    idProdMain and idProdChild are linked to the "products" table. Note also that one product can have several pieces, and a product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces.

    EXAMPLE:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED: I need to update the price of the main product when the price of a child product (piece) changes. Following the example above, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries?

    I have already tried the following to obtain the price of the main product, but it does not work; the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
            ON (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));
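
    A hedged sketch of the aggregation step (Access SQL assumed): join pieces to products on the child id only — the query above also requires products.idProd to equal pieces.idProdMain, which can never hold for the same row — and filter on a main product that actually has pieces (1, the Computer, in the sample data; idProdMain = 5 has no rows in PIECES, which is why nothing came back):

        SELECT Sum(p.price * pc.quant) AS totalPrice
        FROM pieces AS pc
        INNER JOIN products AS p ON p.idProd = pc.idProdChild
        WHERE pc.idProdMain = 1;

    An UPDATE on products can then apply this per-product total whenever a piece's price changes.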

  • ASP.NET - I am generating an .XLS file with a DLL, how do I grant permissions for writing to a file?

    - by hamlin11
    I'm generating an .XLS file with a DLL (Excel Library, http://code.google.com/p/excellibrary/), which I've added as a reference to my project. The code to save the .XLS to disk runs, but it encounters a permissions issue. I've attempted to set full access for IUSRS, Network Service, and Everyone just to see if I could get it working, and none of these seems to make a difference.

    Here's where I'm trying to write the file: c:/temp/test1.xls

    Here's the error:

        [SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.]
        System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0
        System.Security.CodeAccessPermission.Demand() +54
        System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) +2103
        System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) +138
        System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) +89
        System.IO.File.Open(String path, FileMode mode, FileAccess access, FileShare share) +58
        ExcelLibrary.Office.CompoundDocumentFormat.CompoundDocument.Create(String file) +88
        ExcelLibrary.Office.Excel.Workbook.Save(String file) +73
        CHC_Reports.LitAnalysis.CreateSpreadSheet_Click(Object sender, EventArgs e) in C:\Users\brian\Desktop\Enterprise Manager\CHC_Reports\LitAnalysis.aspx.vb:19
        System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
        System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
        System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11041511
        System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11041050
        System.Web.UI.Page.ProcessRequest() +91
        System.Web.UI.Page.ProcessRequest(HttpContext context) +240
        ASP.litanalysis_aspx.ProcessRequest(HttpContext context) +52
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +599
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +171

    Any idea what I need to do to diagnose the permissions issue and allow the file creation? Thanks.
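
    A SecurityException demanding FileIOPermission is a .NET code access security failure, not an NTFS one, which is why granting IUSRS/Everyone full access changes nothing: the application is running under partial trust (typical on shared hosting) and is denied writes outside its own tree. A hedged sketch, assuming the host permits raising the level:

        <!-- web.config: request full trust; many shared hosts lock this to Medium -->
        <system.web>
            <trust level="Full" />
        </system.web>

    If the trust level can't be raised, writing under the application root (e.g. App_Data resolved via Server.MapPath) is usually allowed under Medium trust, unlike c:\temp.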

  • How to use an Audio Unit on the iPhone

    - by CodeToaster
    I'm looking for a way to change the pitch of recorded audio as it is saved to disk, or played back (in real time). I understand Audio Units can be used for this. The iPhone offers limited support for Audio Units (for example, it's not possible to create/use custom audio units, as far as I can tell), but several out-of-the-box audio units are available, one of which is AUPitch.

    How exactly would I use an audio unit (specifically AUPitch)? Do you hook it into an audio queue somehow? Is it possible to chain audio units together (for example, to simultaneously add an echo effect and a change in pitch)?

    EDIT: After inspecting the iPhone SDK headers (I think AudioUnit.h; I'm not in front of a Mac at the moment), I noticed that AUPitch is commented out. So it doesn't look like AUPitch is available on the iPhone after all. weep weep

    Apple seems to have reorganized its iPhone SDK documentation at developer.apple.com of late, and now it's more difficult to find references to AUPitch, etc. That said, I'm still interested in quality answers on using Audio Units (in general) on the iPhone.

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in Vim, and I have it hard-wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the re-wrapping, even though only one semantic change has occurred. One way around this problem is to put every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because the line lengths vary widely:

        1 Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft-wrap at 80, things still look bad, just in a different way.) Is it possible to keep the text on disk with one newline per sentence, but display and edit it in Vim as if the text of each paragraph were one long line, soft-wrapped at 80 characters? I assume this requires some vim-foo rather than tweaking git or LaTeX.
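
    A hedged sketch of the display half (these are stock Vim options, though they soft-wrap at the window edge rather than exactly at column 80):

        " ftplugin/tex.vim: keep one sentence per line on disk, flow it on screen
        setlocal textwidth=0 wrapmargin=0   " never insert hard newlines
        setlocal wrap linebreak nolist      " soft-wrap at word boundaries
        noremap <buffer> j gj               " navigate by display lines
        noremap <buffer> k gk

    On the git side, "git diff --word-diff" is also worth knowing: it shows the single-phrase change even when hard wrapping has reshuffled every line of the paragraph.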

  • Is there a good way to QuickCheck Happstack.State methods?

    - by Paul Kuliniewicz
    I have a set of Happstack.State MACID methods that I want to test using QuickCheck, but I'm having trouble figuring out the most elegant way to accomplish that. The problems I'm running into are:

        - The only way to evaluate an Ev monad computation is in the IO monad via query or update.
        - There's no way to create a purely in-memory MACID store; this is by design. Therefore, running things in the IO monad means there are temporary files to clean up after each test.
        - There's no way to initialize a new MACID store except with the initialValue for the state; it can't be generated via Arbitrary unless I expose an access method that replaces the state wholesale.
        - Working around all of the above means writing methods that only use features of MonadReader or MonadState (and running the test inside Reader or State instead of Ev). This means forgoing the use of getRandom or getEventClockTime and the like inside the method definitions.

    The only options I can see are:

        1. Run the methods in a throw-away on-disk MACID store, cleaning up after each test and settling for starting from initialValue each time.
        2. Write the methods so that most of the code runs in a MonadReader or MonadState (which is more easily testable), and rely on a small amount of non-QuickCheck-able glue around it that calls getRandom or getEventClockTime as necessary.

    Is there a better solution that I'm overlooking?
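
    A minimal sketch of what option 2 looks like in practice, assuming the method body can be factored into a pure State computation (the addItem state and property here are invented, not Happstack API):

        import Control.Monad.State
        import Test.QuickCheck

        -- pure core of a hypothetical MACID method, testable without a store
        addItem :: Int -> State [Int] ()
        addItem x = modify (x :)

        prop_addItem :: Int -> [Int] -> Bool
        prop_addItem x xs = execState (addItem x) xs == x : xs

    The Ev wrapper then only lifts the pure core and passes in getRandom / getEventClockTime results as arguments, so the untested glue stays a few lines long.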

  • Semi-dynamic CDN

    - by dwi kristianto
    I'm developing a couple of websites using PHP (a directory script, etc.) and WordPress as a CMS. I need to improve their performance by using a CDN for static files (CSS, JS, images). The problem is that the CSS and JavaScript files are generated on the fly; I did that following Yahoo's (and other experts') advice to combine the files into one, and also to allow changing the base color of the CSS. For the time being I use a couple of small VPSes, but it's still not fast enough.

    I already contacted MaxCDN, and the support guy said they don't have that kind of service. What I need is a CDN that will serve a visitor's request even when there's no file on the local disk: the CDN should redirect/fetch it from another domain/server. On a VPS this is easily done with a combination of .htaccess and PHP, but NOT on a CDN; most CDNs support only purely static files. Is there any CDN that will serve semi-dynamic files like this?
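
    What this describes is essentially an origin-pull CDN in front of a rewrite rule: the CDN stays dumb and caches whatever the origin returns, while the origin generates missing files on demand. A hedged Apache sketch (generate.php and the assets/ prefix are made-up names):

        # .htaccess on the origin server the CDN pulls from
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^assets/(.+)$ /generate.php?file=$1 [L,QSA]

    Pointed at this origin, an origin-pull CDN hits generate.php only on the first request for each file (and after cache expiry); everything else is served from the edge, so the CDN itself never has to run any code.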

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome.

    In particular I have to deal with issues like:

        - parsing a stream of bytes into packets
        - verifying a packet is correct (easy with a CRC, for instance)
        - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped series of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive, the types of errors that predominate are different)
        - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
        - how to make variable-length packets robust to error correction / recovery

    Any suggestions?
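
    On the verification point, a hedged sketch of one common choice, CRC-16/CCITT (polynomial 0x1021), computed bitwise so it needs no lookup table:

        #include <stdint.h>
        #include <stddef.h>

        /* CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF */
        uint16_t crc16_ccitt(const uint8_t *data, size_t len)
        {
            uint16_t crc = 0xFFFF;
            for (size_t i = 0; i < len; i++) {
                crc ^= (uint16_t)data[i] << 8;
                for (int b = 0; b < 8; b++)
                    crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                         : (uint16_t)(crc << 1);
            }
            return crc;
        }

    The sender appends the CRC to each packet; the receiver recomputes it over the received bytes and, on a mismatch, discards input until the next framing marker rather than attempting correction — on a point-to-point link, resynchronization is usually cheaper and safer than forward error correction.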

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache a response in limited disk space (in my configuration, 50MB is the upper bound)? I think two approaches are possible.

    One is to cache each response object whole, one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file split into fixed-length slots; incoming response objects are likewise treated as blocks of data of the same length as the slots. We can fill slots until the whole file runs out, then use some replacement algorithm to swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy.

    But when I look in Firefox's Cache directory, I find that it (maybe) uses a different method: a lot of variable-length files reside in that directory, and all of those files are filled with non-displayable characters. I don't know, but really want to know, what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.

  • % style macros not supported in some C++/CLI project property pages under VS2010?

    - by Dave Foster
    We're currently evaluating VS2010 and have upgraded our VS2008 C++/CLI project to the new .vcxproj format. I've noticed that a certain property we had set in the project settings did not get translated properly. Under Configuration Properties - Managed Resources - Resource Logical Name, we used to have (in VS2008) the setting:

        $(IntDir)\$(RootNamespace).$(InputName).resources

    which indicated that all .resx files were to compile into OurLib.SomeForm.resources inside the assembly (the Debug portion is dropped when assembled). According to MSDN, the $(InputName) macro no longer exists and should be replaced with %(Filename). However, when I translate the above line by swapping those macros, %(Filename) does not seem to ever expand. On the second .resx file it tries to compile, I get:

        LINK : fatal error LNK1316: duplicate managed resource name 'Debug\OurLib.%(Filename).resources'

    This indicates to me that the % style macros are not being expanded here, at least in this specific property.

    If we don't set anything in that property, the default behavior seems to be to add the subdirectory as a prefix, such as OurLib.Forms.SomeForm.resources, where Forms is the subdirectory of our project that the .resx file lives in. This only occurs when the .resx file is in an immediate subdirectory of the project being built; if a .resx file exists somewhere else on disk (e.g. ..\OtherLib\Forms\SomeForm2.resx), this prefix is NOT added. That is causing an issue with loading form resources, because nothing accounts for this possible prefix, even though we are using the standard Forms Designer method of getting at resources:

        System::ComponentModel::ComponentResourceManager^ resources =
            (gcnew System::ComponentModel::ComponentResourceManager(SomeForm::typeid));

    and do not specify the .resources file by name. The issue I've just described may not be the same as the original question, but if I were to fix the Resource Logical Name issue, I think this would all go away. Does anyone have any information about these % macros and where they are allowed to be used?

  • Pros/cons of reading connection string from physical file vs Application object (ASP.NET)?

    - by HaterTot
    My ASP.NET application reads an XML file to determine which environment it's currently in (e.g. local, development, production). It checks this file every single time it opens a connection to the database, in order to know which connection string to grab from the application settings.

    I'm entering a phase of development where efficiency is becoming a concern. I don't think it's a good idea to have to read a file on a physical disk every single time I wish to access the database (which is very often). I was considering storing the connection string in Application["ConnectionString"], so the code would be:

        public static string GetConnectionString()
        {
            // 'Application' isn't directly reachable from a static method,
            // so go through the current context
            HttpApplicationState app = HttpContext.Current.Application;
            if (app["ConnectionString"] == null)
            {
                XmlDocument doc = new XmlDocument();
                doc.Load(HttpContext.Current.Request.PhysicalApplicationPath + "bin/ServerEnvironment.xml");
                // the original snippet used an undeclared 'xnl'; a lookup along
                // these lines is presumably what was intended (element name assumed)
                XmlNodeList xnl = doc.GetElementsByTagName("environment");
                XmlElement xe = (XmlElement)xnl[0];
                string connString;
                switch (xe.InnerText.ToLower())
                {
                    case "local":
                        connString = Settings.Default.ConnectionStringLocal;
                        break;
                    case "development":
                        connString = Settings.Default.ConnectionStringDevelopment;
                        break;
                    case "production":
                        connString = Settings.Default.ConnectionStringProduction;
                        break;
                    default:
                        throw new Exception("no connection string defined");
                }
                app["ConnectionString"] = connString;
            }
            return app["ConnectionString"].ToString();
        }

    I didn't design the application, so I figure there must have been a reason for reading the XML file every time (to change settings while the application runs?); I have very little grasp of the inner workings here. What are the pros and cons? Do you think I'd see a small performance gain by implementing the function above? Thanks.

  • Mysql InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody. I have two databases and need to link information between two big tables (more than 3M entries each, continuously growing). The first database has a table 'pages' that stores various information about web pages, including the URL of each one. The 'URL' column is a varchar(512) and has no index. The second database has a table 'urlHops' defined as:

        CREATE TABLE urlHops (
            dest varchar(512) NOT NULL,
            src varchar(512) DEFAULT NULL,
            timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY dest_key (dest),
            KEY src_key (src)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now, I basically need to issue (efficiently) queries like this:

        select p.id, p.URL
        from db1.pages p, db2.urlHops u
        where u.src = p.URL and u.dest = ?

    At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on that table (far more than the number of SELECTs I would do using this index). Other possible solutions I thought of:

        - adding a column to pages that stores the md5 hash of the URL, and indexing it; this way I could query using the md5 of the URL, with the advantage of an index on a smaller column.
        - adding another table that contains only page id and page URL, indexing both columns. But this is perhaps a waste of space, with the only advantage of not slowing down the inserts and updates I execute on 'pages'.

    I don't want to slow down the inserts and updates, but at the same time I want to query on the URL efficiently. Any advice? My primary concern is performance; wasting some disk space, if needed, is not a problem. Thank you, regards, Davide
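
    A hedged sketch of the hash-column idea (column and index names invented): store a fixed-width digest on both sides, index that, and keep the full-URL comparison in the query as a guard against hash collisions:

        ALTER TABLE pages   ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '',
                            ADD INDEX idx_pages_url_md5 (url_md5);
        ALTER TABLE urlHops ADD COLUMN src_md5 CHAR(32) NOT NULL DEFAULT '',
                            ADD INDEX idx_urlhops_src_md5 (src_md5);

        UPDATE pages   SET url_md5 = MD5(URL);
        UPDATE urlHops SET src_md5 = MD5(src);

        SELECT p.id, p.URL
        FROM   db2.urlHops u
        JOIN   db1.pages   p ON p.url_md5 = u.src_md5 AND p.URL = u.src
        WHERE  u.dest = ?;

    A CHAR(32) index is roughly a quarter the width of one over varchar(512), so it penalizes the INSERT/UPDATE path far less; keeping the column current from application code (or a trigger) is the remaining cost.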

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do this is to:

        1. Kill the Cassandra process on one server of the cluster.
        2. Start it again, wait for the commit log to be written to disk, and kill it again.
        3. Make the modifications in the storage.xml file.
        4. Rename or delete the files in the data directories according to the changes made.
        5. Start Cassandra.
        6. Go to 1 with the next server on the list.

    My questions are:

        - Did I understand the process correctly? Is there any risk of data corruption?
        - During the process, there will be servers with different versions of the storage.xml file in the same cluster, for the same keyspace. Is that a problem?
        - Same question as above if we not only add, rename, and remove ColumnFamilies, but also change the CompareWith parameter or transform an existing column family into a super one. Or do we need to change the name in that case?

    Thank you for your answers. It's the first time I'll be doing this, and I'm a little bit scared.

  • XElement vs Dictionary

    - by user135498
    Hi all, I need advice. I have an application that imports 10,000 rows, each containing a name and address, from a text file into XElements that are subsequently added to a synchronized queue. When the import is complete, the app spawns worker threads that process the XElements by dequeuing them, making a database call, inserting the database output into the request document, and inserting the processed document into an output queue. When all requests have been processed, the output queue is written to disk as an XML document.

    I used XElements for the requests because I needed the flexibility to add fields to a request during processing; i.e., depending on the job type, the app might need to add a phone number, date of birth, or email address to a request, based on a name/address match against a public-record database.

    My question is this: the XElements seem to use quite a bit of memory, and I know there is a lot of parsing as each document makes its way through the processing methods. I'm considering replacing the XElements with a Dictionary object, but I'm skeptical the gain will be worth the effort; in essence it will accomplish the same thing. Thoughts?
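
    A hedged sketch of the Dictionary version of one request (field names and the LookupPhone helper are invented), showing that the add-fields-later flexibility survives the switch:

        // one request record; any pipeline stage can add fields
        var request = new Dictionary<string, string>
        {
            { "name", "John Smith" },
            { "address", "1 Main St" }
        };

        // later, depending on job type:
        request["phone"] = LookupPhone(request["name"], request["address"]);

    The trade-off is that the final XML output must then be serialized from the dictionaries at the end, whereas XElements are already XML; measuring both versions under a memory profiler before committing to the rewrite would settle whether the saving is real.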

  • ASP.NET error on Bitmap.Save "Exception (0x80004005): A generic error occurred in GDI+."

    - by Batu
    Hi, I have a function that first reads an image from disk, resizes it, and then saves it to another directory. When I call Bitmap.Save(directory + theImageName), it returns the error stated in the question title. I checked that the directory is right and that the given image name doesn't exist in that directory. What is weird is that the same code works great on the local machine, but when I upload it to my shared server it just doesn't work. The code is below:

        bmpOut = new Bitmap(Size, Size);
        Graphics g = Graphics.FromImage(bmpOut);
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.FillRectangle(Brushes.White, 0, 0, Size, Size);
        int topBottomPadding = 0;
        int leftRightPadding = 0;
        if (Size > lnNewWidth + 1)
            leftRightPadding = Convert.ToInt32((Size - lnNewWidth) / 2);
        else if (Size > lnNewHeight + 1)
            topBottomPadding = Convert.ToInt32((Size - lnNewHeight) / 2);
        g.DrawImage(loBMP, leftRightPadding, topBottomPadding, lnNewWidth, lnNewHeight);
        Bitmap bmp = new Bitmap(bmpOut);
        if (bmp != null)
            bmp.Save(ResizedOutput);
        bmp.Dispose();
        bmpOut.Dispose();
        g.Dispose();
        loBMP.Dispose();

    Stack trace:

        [ExternalException (0x80004005): A generic error occurred in GDI+.]
        System.Drawing.Image.Save(String filename, ImageCodecInfo encoder, EncoderParameters encoderParams) +377630
        System.Drawing.Image.Save(String filename, ImageFormat format) +69
        System.Drawing.Image.Save(String filename) +25
        Utilities.ResizeImage(String fileName, String mode) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Utilities.cs:181
        Link.ToProductImage(String fileName) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Link.cs:79
        Product.PopulateControls(ProductDetails pd) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:37
        Product.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:20
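
    "A generic error occurred in GDI+" on Save is almost always a path or file-lock problem rather than anything about the bitmap itself: the target directory doesn't exist on the server, the worker process can't write there, or the source file is still locked, since GDI+ keeps the file a Bitmap was loaded from open until that Bitmap is disposed. A hedged defensive sketch (the folder name is illustrative):

        // resolve a writable folder inside the application and create it if missing
        string dir = HttpContext.Current.Server.MapPath("~/App_Data/resized/");
        if (!Directory.Exists(dir))
            Directory.CreateDirectory(dir);

        string resizedOutput = Path.Combine(dir, fileName);
        bmp.Save(resizedOutput, System.Drawing.Imaging.ImageFormat.Jpeg);

    In particular, saving over the very file loBMP was loaded from will throw exactly this error, so that is worth ruling out before blaming the host's permissions.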

  • Loading from archive?

    - by fuzzygoat
    Can anyone point me in the right direction? I have two sets of load/save methods below that I am playing with to save an array of objects out to disk and then load them back in. I am getting a little confused, after calling "saveMoons", about how I should go about loading that data back in... any pointers would be great.

        -(void)saveMoons:(NSString *)savePath {
            NSMutableData *data = [[NSMutableData alloc] init];
            NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
            [moons encodeWithCoder:archiver];
            [archiver finishEncoding];
            [data writeToFile:savePath atomically:YES];
            [data release];
            [archiver release];
        }

        -(void)loadMoons:(NSString *)loadPath {
            NSMutableData *data = [[NSMutableData alloc] initWithContentsOfFile:loadPath];
            NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
            // [self setMoons:[unarchiver ??????]];
        }

        // ------------------------------------------------------------------- **
        // SIMPLER
        // ------------------------------------------------------------------- **

        -(void)saveMoons_SIMPLE:(NSString *)savePath {
            NSData *data = [NSKeyedArchiver archivedDataWithRootObject:moons];
            [data writeToFile:savePath atomically:YES];
        }

        -(void)loadMoons_SIMPLE:(NSString *)loadPath {
            [self setMoons:[NSKeyedUnarchiver unarchiveObjectWithFile:loadPath]];
        }

    gary
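
    A hedged sketch of the keyed pattern, which makes the load side symmetric with the save side (the key string is arbitrary; it just has to match on both sides):

        // save: encode under an explicit key instead of calling encodeWithCoder: directly
        [archiver encodeObject:moons forKey:@"moons"];
        [archiver finishEncoding];

        // load: decode under the same key
        NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
        [self setMoons:[unarchiver decodeObjectForKey:@"moons"]];
        [unarchiver finishDecoding];
        [unarchiver release];
        [data release];

    The _SIMPLE pair already does the equivalent through archivedDataWithRootObject: / unarchiveObjectWithFile:, so loadMoons_SIMPLE: answers the "load it back" question; the verbose pair is only worth keeping if several objects must go into one archive.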

  • Change the content type of a pop up window.

    - by Oscar Reyes
    This question brought up a new one: I have an HTML page, and I need it to change the content type when the user presses the "Save" button, so that the browser prompts to save the file to disk. I've been doing this on the server side to offer "Excel" versions of the page (which is basically an HTML table):

        <c:if test="${page.asExcelAction}">
            <% response.setContentType("application/vnd.ms-excel"); %>

    What I'm trying to do now is the same thing, but on the client side with JavaScript, and I can't manage to do it. This is what I've got so far:

        <html>
        <head>
        <script>
        function saveAs(){
            var sMarkup = document.getElementById('content').innerHTML;
            //var oNewDoc = document.open('application/vnd.ms-excel');
            var oNewDoc = document.open('text/html');
            oNewDoc.write( sMarkup );
            oNewDoc.close();
        }
        </script>
        </head>
        <body>
            <div id='content'>
                <table>
                    <tr>
                        <td>Stack</td>
                        <td>Overflow</td>
                    </tr>
                </table>
            </div>
            <input type="button" value="Save as" onClick="saveAs()"/>
        </body>
        </html>
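
    A page can't rewrite its own Content-Type after the fact, but a hedged client-side workaround for small documents is to navigate to a data: URI whose MIME type the browser won't render inline (Internet Explorer of that era does not support this):

        function saveAs() {
            var sMarkup = document.getElementById('content').innerHTML;
            // a non-HTML type typically triggers a download / open-with prompt
            window.location.href =
                'data:application/vnd.ms-excel;charset=utf-8,' +
                encodeURIComponent(sMarkup);
        }

    The robust route remains the server-side one already shown: a round trip that sets response.setContentType, ideally together with a Content-Disposition: attachment header so the browser always prompts.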

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

        - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is due more to strange design decisions on the part of the original developers than to a requirement of storing things on the filesystem).
        - It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move and delete without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
        - I will be adding some other jobs that need to access the files from another service to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy.
        - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing files as BLOBs in MySQL? I guess it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

  • Finding a MIME type for a file on windows

    - by rmeador
    Is there a way to get a file's MIME type using some system call on Windows? I'm writing an IIS extension in C++, so it must be callable from C++, and I do have access to IIS if there is some functionality exposed. Obviously IIS itself must be able to do this, but my googling has been unable to find out how. I did find a .NET-related question here on SO, but that doesn't give me much hope (as neither a good solution nor a C++ solution is mentioned there).

    I need this so I can serve up dynamic files with the appropriate content type from my app. My plan is to first consult a list of MIME types within my app, then fall back to the system's MIME type listing (however that works; obviously it exists, since it's how you associate files with programs). In some cases I only have a file extension to work with, but in other cases I may have an actual on-disk file to examine. Since these will not be user-uploaded files, I believe I can trust the extension, and I'd prefer an extension-only lookup solution since it seems simpler and faster. Thanks!
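
    A hedged sketch of the extension-only route: file associations live under HKEY_CLASSES_ROOT, and many extension keys carry a "Content Type" value (not all do, so the in-app table remains the first stop):

        #include <windows.h>
        #include <string>

        // returns e.g. L"text/html" for L".html", or an empty string if unregistered
        std::wstring MimeFromExtension(const wchar_t* ext)
        {
            wchar_t buf[256];
            DWORD size = sizeof(buf);  // size in bytes, as RegGetValue expects
            LONG rc = RegGetValueW(HKEY_CLASSES_ROOT, ext, L"Content Type",
                                   RRF_RT_REG_SZ, NULL, buf, &size);
            return rc == ERROR_SUCCESS ? std::wstring(buf) : std::wstring();
        }

    RegGetValueW requires Vista or later; on older systems RegOpenKeyEx/RegQueryValueEx does the same job. For sniffing an actual on-disk file there is also FindMimeFromData in urlmon.dll, though it recognizes a fairly limited set of types.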

  • Cleaning up a sparsebundle with a script

    - by nickg
    I'm using Time Machine to back up some servers to a sparse disk image bundle, and I'd like to have a script to clean up the old backups and re-size the image once space has been freed up. I'm fairly certain the data is protected, because if I delete the old backups by right-clicking, I have to enter a password to be able to delete them. To allow my script to delete them, I've been running it as root. For some reason it just won't run, and for every file it tries to delete I get:

        rm: /file/: Operation not permitted

    Here is what I have as my script:

        #!/bin/bash
        for server in servername; do
            /usr/bin/hdiutil attach -mountpoint /path/to/mountpoint /path/to/sparsebundle/$server.sparsebundle/;
            /bin/sleep 10;
            /usr/bin/find /path/to/mountpoint -type d -mtime +7 -exec /bin/rm -rf {} \;
            /usr/bin/hdiutil unmount /path/to/mountpoint;
            /bin/sleep 10;
            /usr/bin/hdiutil compact /path/to/sparsebundle/$server.sparsebundle/;
        done
        exit;

    One of the problems I thought was causing this was that it needed a mountpoint specified, since the default mount is to /Volumes/Time\ Machine\ Backups/; that's why I created a mountpoint. I also thought it was trying to delete the files too quickly after mounting, before the image was actually mounted, which is why I added the sleep. I've also tried using the -delete option for find instead of -exec, but it didn't make any difference. Any help on this would be greatly appreciated, because I'm out of ideas as to why this won't work.
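
    Time Machine protects Backups.backupdb with ACLs that deny deletion even to root, which matches the "Operation not permitted" symptom. A hedged sketch of stripping them before the rm (paths illustrative):

        # remove the deny ACLs recursively, then delete as before
        /bin/chmod -RN "/path/to/mountpoint/Backups.backupdb/servername"
        /usr/bin/find /path/to/mountpoint -type d -mtime +7 -prune -exec /bin/rm -rf {} \;

    On OS X 10.7 and later, "tmutil delete" is the supported way to remove individual backups and avoids the ACL dance entirely. Adding -prune also stops find from descending into directories it is about to delete, which otherwise produces spurious errors.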

  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, by threads as well as by processes, so no mutexes or coarse locks should be applied here. To keep the use of locks to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it has updated is changed to refer to the new data. So I will need to implement a small locking scheme only for changing this one int so that it refers to the new address. What is the best way to do it?

    I was thinking about maybe putting a flag before the address which, when set, makes readers spin until it's released. But I'm afraid that isn't at all atomic, is it? E.g.: a reader reads the flag at the same moment it is unset, while a writer writes the flag and changes the value of the int — the reader may read an inconsistent value!

    I'm looking for locking techniques, but all I find is either thread-locking techniques or ways to lock an entire file, not individual fields. Is this not possible? How do append-only databases handle this? Thanks! Cauê
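
    A hedged sketch using C11 atomics over a shared memory mapping of the file's header, which is one common way to make a single published word safe across both threads and processes (field names invented; assumes the header word is naturally aligned and the file is mmap()ed MAP_SHARED by every participant):

        #include <stdatomic.h>
        #include <stdint.h>

        /* header lives at the start of the mmap()ed file shared by all processes */
        typedef struct {
            _Atomic uint64_t head_offset;  /* points at the current record */
        } FileHeader;

        /* writer: append the new record first, then publish its offset */
        void publish(FileHeader *h, uint64_t new_offset) {
            atomic_store_explicit(&h->head_offset, new_offset, memory_order_release);
        }

        /* reader: always sees either the old or the new offset, never a mix */
        uint64_t current(const FileHeader *h) {
            return atomic_load_explicit((_Atomic uint64_t *)&h->head_offset,
                                        memory_order_acquire);
        }

    The release/acquire pairing guarantees that a reader who sees the new offset also sees the record data written before it, so no flag or spin lock is needed for a single word. This is essentially how append-only databases do it: the new root pointer is published through one atomic word after the data it points to is durable.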

  • Java runs out of memory, even though I give it plenty!

    - by spitzanator
    Hey, folks. I'm running a Java server (specifically Winstone: http://winstone.sourceforge.net/) like this:

        java -server -Xmx12288M -jar /usr/share/java/winstone-0.9.10.jar \
            --useSavedSessions=false --webappsDir=/var/servlets --commonLibFolder=/usr/share/java

    This has worked fine in the past, but now it needs to load a bunch more stuff into memory than before. The odd part is that, according to top, it has 15.0g of VIRT(ual memory) while its RES(ident set) is 8.4g. Once it hits 8.4g, the CPU hangs at 100% (even though it's loading from disk), and eventually I get Java's OutOfMemoryError. Presumably the CPU hanging at 100% is Java doing garbage collection.

    So my question is: what gives? I gave it 12 gigs of memory, and it's only using 8.4 gigs before it throws in the towel. What am I doing wrong?

    Oh, and I'm using java version "1.6.0_07", Java(TM) SE Runtime Environment (build 1.6.0_07-b06), Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode), on Linux. Thanks, Matt
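
    A hedged first diagnostic step for a 1.6 VM: turn on GC logging to see which region is actually exhausted, since an OutOfMemoryError with headroom left under -Xmx often means PermGen or GC overhead rather than the heap proper (flag values illustrative):

        java -server -Xmx12288M -Xms12288M \
             -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
             -XX:MaxPermSize=256M \
             -jar /usr/share/java/winstone-0.9.10.jar ...

    The exact OutOfMemoryError message matters: "PermGen space" points at MaxPermSize, "GC overhead limit exceeded" at a nearly full heap thrashing the collector, and a resident set that stalls well below Xmx can simply mean the machine has less physical RAM than the configured heap, so paging makes collection crawl long before the heap fills.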

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table ([id] is an identity):

        [id], [parent_id], [path]
        1, NULL, 1
        2, 1, 1-2
        3, 1, 1-3
        4, 3, 1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess that depends on whether the table is write-heavy or read-heavy. The "path" is simply for display purposes and serves absolutely no other purpose.

    Here is what I have done to implement this "path":

        1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity.
        2. INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY().
        3. STORED PROCEDURE - requires all inserts into this table to go through the stored procedure implementing the trigger logic.
        4. VIEW - requires building the path in the view.

    Options 1 and 2 seem annoying if massive amounts of data are entered at once. Option 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. Options 1, 2, and 3 require maintaining a path column on the table. Option 4 removes all of those limitations, but requires the view to perform the path logic and requires use of the view whenever a path is to be displayed.

    I have successfully implemented all of the above approaches, and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
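
    For the compute-on-read option (4), SQL Server 2005's recursive CTEs can build the path inside the view; a hedged sketch against a hypothetical table named tree:

        CREATE VIEW tree_with_path AS
        WITH paths (id, parent_id, path) AS (
            -- anchor: roots start their own path
            SELECT id, parent_id, CAST(id AS varchar(900))
            FROM   tree
            WHERE  parent_id IS NULL
            UNION ALL
            -- recursive step: extend the parent's path with this node's id
            SELECT t.id, t.parent_id,
                   CAST(p.path + '-' + CAST(t.id AS varchar(10)) AS varchar(900))
            FROM   tree t
            JOIN   paths p ON t.parent_id = p.id
        )
        SELECT id, parent_id, path FROM paths;

    If reads dominate, materializing the column with one of the trigger approaches (or hierarchyid, in SQL Server 2008 and later) trades write cost for read speed; the view costs nothing on write but recomputes the recursion on every read.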

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small: anywhere from 5 to 50 bytes, depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type.

    What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap keyed on message type with an output stream as the value, but I'm sure there's a better way to do it. Thanks!
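
    A hedged sketch of the hashmap-of-streams approach (class and buffer sizes illustrative). The two practical constraints are keeping each per-message write cheap and staying under the OS file-descriptor limit with ~6,000 open files:

        import java.io.*;
        import java.util.HashMap;
        import java.util.Map;

        public class MessageSplitter {
            private final Map<String, OutputStream> outputs =
                    new HashMap<String, OutputStream>();

            OutputStream streamFor(String type) throws IOException {
                OutputStream out = outputs.get(type);
                if (out == null) {
                    // large buffer so most writes never touch the disk directly
                    out = new BufferedOutputStream(
                            new FileOutputStream(type + ".dat", true), 1 << 16);
                    outputs.put(type, out);
                }
                return out;
            }

            void write(String type, byte[] payload) throws IOException {
                streamFor(type).write(payload);
            }

            void close() throws IOException {
                for (OutputStream out : outputs.values()) out.close();
            }
        }

    6,000 buffered streams at 64 KB each is roughly 375 MB of buffers, well within "use as much memory as possible"; if the process's file-descriptor limit is lower than the type count, an LRU that closes the coldest streams and reopens them in append mode keeps the open-file count bounded.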
