Search Results

Search found 18409 results on 737 pages for 'large projects'.

Page 156/737

  • Looking for actual experience of a RAID 5 two-drive failure?

    - by Brian
    I'm wondering whether anyone has personal experience of a two-drive RAID 5 failure with large drives. As I understand it, the theory is that with large 1-2 TB drives, if one drive in the RAID set fails, the rebuild has to read everything on all the remaining drives, hitting them very hard, so the chance of another failure goes up, especially if the drives came from the same manufacturing batch. And if you lose another drive, you lose all the data. This is usually explained alongside the statement "RAID is not backup", which I agree with. The theory makes sense, and I understand it, but does it really happen in practice?
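
    One way to see why the theory worries people is to estimate the chance of hitting an unrecoverable read error (URE) during the rebuild. The sketch below is purely illustrative: it assumes a four-disk array of 2 TB drives, a vendor-quoted URE rate of one error per 10^14 bits read, and independent errors, none of which may match a real array.

        # Illustrative only: probability of at least one unrecoverable read error
        # (URE) while rebuilding a degraded RAID 5 set. All figures are assumptions.
        surviving_drives = 3                 # assumed 4-disk array, one drive already dead
        drive_size_bits = 2 * 10**12 * 8     # 2 TB drives, read in full during rebuild
        ure_rate = 1e-14                     # common consumer-drive spec: 1 error per 1e14 bits

        bits_read = surviving_drives * drive_size_bits
        p_clean = (1 - ure_rate) ** bits_read
        print(f"Chance of at least one URE during the rebuild: {1 - p_clean:.0%}")
        # With these assumptions the result is roughly 38%, which is why the
        # "second failure during rebuild" concern gets taken seriously.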

    Read the article

  • Looking for a Unix tool/script that, given an input path, will batch uncompressed text files into ~100 MB groups and compress each batch into a single gzip file

    - by newToFlume
    I have a dump of thousands of small text files (1-5 MB each), each containing lines of text. I need to batch them up so that each batch is of a fixed size, say 100 MB, and compress each batch. A batch could be either a single file that is just a 'cat' of the contents of the individual text files, or just the individual text files themselves. Caveats: unix split -b will not work here, as I need to keep lines of text intact, and using its lines option is complicated because there is a large variance in the number of bytes per line. The batches need not be exactly the requested size, as long as they are within 5% of it. The lines are critical and must not be lost: I need to confirm that the input made its way to the output without loss, ideally with a rolling checksum (something like CRC32, but stronger in the face of collisions). A script would do nicely, but this seems like a task someone has done before, and it would be nice to see some code (preferably Python or Ruby) that does at least something similar.
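
    Not an established tool, just a minimal sketch of the kind of script the question asks for: it streams lines into ~100 MB gzip batches without ever splitting a line, and keeps a SHA-256 digest of every line written (stronger than CRC32 against collisions), which can later be recomputed from the decompressed batches to confirm nothing was lost. The output file naming and the *.txt glob are assumptions.

        # Sketch: batch text files into ~100 MB gzip archives, keeping lines intact.
        import gzip
        import hashlib
        import sys
        from pathlib import Path

        TARGET = 100 * 1024 * 1024  # ~100 MB per batch, measured before compression

        def batch_and_compress(input_dir, output_dir):
            out_dir = Path(output_dir)
            out_dir.mkdir(parents=True, exist_ok=True)
            digest = hashlib.sha256()          # running checksum over every line written
            batch_no, written = 0, 0
            out = gzip.open(out_dir / f"batch_{batch_no:05d}.gz", "wb")
            for src in sorted(Path(input_dir).glob("*.txt")):
                for line in src.open("rb"):    # lines are never split across batches
                    if written + len(line) > TARGET and written > 0:
                        out.close()
                        batch_no, written = batch_no + 1, 0
                        out = gzip.open(out_dir / f"batch_{batch_no:05d}.gz", "wb")
                    out.write(line)
                    digest.update(line)
                    written += len(line)
            out.close()
            print("sha256 of all input lines:", digest.hexdigest())

        if __name__ == "__main__":
            batch_and_compress(sys.argv[1], sys.argv[2])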

    Read the article

  • I can't browse PHP pages on my local server

    - by tibin mathew
    Hi, I can't browse PHP pages on my local server; it was working fine before. I can browse HTML pages and ASP pages with no problems, but when I try to browse a PHP page it never finishes loading. What could the problem be? I am using Windows 2000 Advanced Server and my web server is Tomcat. Please, someone help me. I'm not getting anything in the browser, it just keeps loading; nothing shows on the page, and I'm not getting a 404 error or anything like that. For example, my file is located inside a folder named myproject. I can reach http://localhost/projects/myproject, but I can't browse PHP pages below that: http://localhost/projects/myproject/index.php just keeps loading and nothing is ever shown on the page.

    Read the article

  • What to do with old hard drives?

    - by caliban
    I have over 100 old hard drives, ranging from 100 MB Quantums to 200 GB WDs, most of them PATA, some SATA, and most still working. The squirrel mentality runs in my family: hoard everything, discard nothing. So, and this is a genuine question, any suggestions on how to put these drives to use (anything at all) instead of them just being deadweight and space takers around the office? Objectives and suggestions to keep in mind when you post an answer: it should showcase your geekiness, be plain fun, serve a social purpose, or benefit the community. You do not need to limit your answer to only one hard drive; if your project needs all 100+, bring it on! Your answer need not be limited to one project per hard drive either; if one hard drive can be used for multiple projects, bring it on! If additional accessories need to be purchased, make sure they are common. Don't tell me to get a moon rock or something. The projects you suggest should serve a utility and not be just for decoration.

    Read the article

  • Bad switch duplicates my IP

    - by tacoen
    I run a LAN covering a large area, with many switches and APs on it. At some point I couldn't ping my servers, and the error said my IP address was duplicated. Using arpwatch I found that one of the switches was flip-flopping the IP, and I isolated the troublesome switch by its MAC address. But since this is a large LAN, I doubt this will be the last such case. Is there any software or hardware I can use to prevent this kind of error? Sorry for my bad English.
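
    Not a fix for the switch itself, but as a continuous detector in the spirit of arpwatch, a small sketch like the one below (assuming the scapy library is installed and the script runs with privileges to sniff) can watch ARP traffic and flag any IP that is claimed by more than one MAC address. The alerting and interface handling are left as assumptions.

        # Rough sketch of an ARP-conflict watcher. It only detects duplicate
        # claims; it does not prevent them.
        from collections import defaultdict
        from scapy.all import ARP, sniff

        claims = defaultdict(set)  # IP address -> set of MACs seen claiming it

        def check(pkt):
            if pkt.haslayer(ARP):
                ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
                claims[ip].add(mac)
                if len(claims[ip]) > 1:
                    print(f"Possible duplicate IP {ip}: claimed by {sorted(claims[ip])}")

        sniff(filter="arp", prn=check, store=0)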

    Read the article

  • $DISPLAY dependent gtk themes

    - by Vlad Seghete
    I have a computer at home that I log into remotely. The "monitor" for it is a TV, so I want GTK applications to use a large font and icon theme, which I managed to do by editing the ~/.gtkrc-2.0 file and some other similar settings. What I want is a separate theme for when I'm logged in remotely. The best way to explain it is that I would like my GTK theme choice to depend on the X display the application is started on. For example, if I start something on :0.0, that is the TV and I want large fonts; but if I start it on localhost:10.0, I want a regular-size font, because it will be rendered on my laptop screen. The elegant solution would be some sort of IF statement in the .gtkrc-2.0 file that checks the $DISPLAY variable and behaves accordingly. The problem is I can't find any documentation on control structures in .gtkrc files, or on whether that is even possible.
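
    gtkrc files don't have conditionals, but GTK+ 2.x honours the GTK2_RC_FILES environment variable, so one workaround is a tiny wrapper that picks an rc file based on $DISPLAY before launching the application. A minimal sketch; the two rc file names are hypothetical:

        # Wrapper sketch: choose a gtkrc by display, then exec the real program,
        # e.g.  python gtk-wrap.py gedit
        import os
        import sys

        display = os.environ.get("DISPLAY", "")
        rc = "~/.gtkrc-2.0-tv" if display.startswith(":0") else "~/.gtkrc-2.0-remote"
        os.environ["GTK2_RC_FILES"] = os.path.expanduser(rc)
        os.execvp(sys.argv[1], sys.argv[1:])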

    Read the article

  • How should a small team using multiple OS's deploy over github?

    - by Toby
    We have a small development team that has recently moved to GitHub to host our projects. The team consists of three developers, two on Windows and one on a Mac. I am currently researching the best way to deploy applications to our Linux servers (dev and production). Capistrano running locally would be ideal, but from what I read this won't work for the Windows machines. It looks like the best way is to use a post-receive hook in GitHub; I can see how this would work for auto-deploying to dev, but I don't see how we could then deploy to live. I have found paid services like http://www.deployhq.com/ but it feels like something that a quick bit of code should be able to do for free, I just can't seem to get myself pointed in the right direction! I was wondering what would be considered best practice for small-team deployment involving multiple local OSes and GitHub.
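
    One OS-agnostic pattern (a sketch, not a recommendation of specific tooling): let GitHub's post-receive webhook hit a tiny listener on the dev server that fast-forwards a checkout, and keep production deploys as a deliberate, manual trigger of the same logic. Everything below, the paths, port, and branch, is an assumption.

        # Sketch of a post-receive webhook listener: GitHub POSTs here after a
        # push and the script fast-forwards a fixed checkout.
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        DEPLOY_DIR = "/var/www/myapp"      # hypothetical checkout on the dev server
        BRANCH = "master"

        class DeployHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Drain the request body so the connection closes cleanly.
                self.rfile.read(int(self.headers.get("Content-Length", 0)))
                subprocess.check_call(["git", "fetch", "origin"], cwd=DEPLOY_DIR)
                subprocess.check_call(["git", "checkout", BRANCH], cwd=DEPLOY_DIR)
                subprocess.check_call(["git", "merge", "--ff-only", f"origin/{BRANCH}"],
                                      cwd=DEPLOY_DIR)
                self.send_response(200)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8000), DeployHandler).serve_forever()

    Deploying to live could then be a separate, deliberate step, for example running the same pull logic by hand or behind a second, authenticated endpoint, rather than an automatic consequence of every push.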

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, and we also want to hold it in a single place that multiple users can access. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Server Management Studio, and neither do I want to have to create an Access database with links for each current database or for ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases, and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • Is 10% too much for autogrow on a 4 GB SQL Server DB?

    - by ntsue
    I am getting the following error: 2011-03-07 21:59:35.73 spid64 Autogrow of file 'MYDB_DATA' in database 'MYDB' was cancelled by user or timed out after 16078 milliseconds. Use ALTER DATABASE to set a smaller FILEGROWTH value for this file or to explicitly set a new file size. I did some research, and I found that for large databases you should set autogrow to a fixed size (MB), and not to a percentage. I feel like this database is not large and I may not be addressing the correct issue by changing this value. Does anyone have any opinions? Thank you! EDIT: I should have specified SQL Server 2008 RC2 running on Windows Server 2008

    Read the article

  • Good bitmap fonts with big sizes and unicode support

    - by bitonic
    I really like bitmap fonts for programming and terminal use. As far as I know there are two bitmap fonts with good Unicode support: Unifont and Fixed. The problem is that I have a really high-resolution screen, and they're both too small. Fixed does include a large size (10x20) but it looks really bad (it's basically always bold, and bold is a different face). Are there any other bitmap fonts with Unicode support and large sizes? Terminus is the only font with a decent size, but it doesn't have good Unicode support. Having good coverage of mathematical symbols would be enough, since that's what I need.

    Read the article

  • Is it safe to remove Per user queued Windows Error Reporting?

    - by Rewinder
    I was cleaning up my laptop's hard disk, running Windows 7, and as part of the process I ran the Disk Cleanup utility. To my surprise I saw two items in the list that were quite large (both ~300 MB): "Per user queued Windows Error Reporting" and "System queued Windows Error Reporting". I guess I had never noticed these because they were never that big. So, what are these items? Any particular reason why they became so large all of a sudden? And finally, is it safe to remove them?

    Read the article

  • Per-user vhost logging

    - by kojiro
    I have a working per-user virtual host configuration with Apache, but I would like each user to have access to the logs for his virtual hosts. Obviously the ErrorLog and CustomLog directives don't accept the wildcard syntax that VirtualDocumentRoot does, but is there a way to achieve logs in each user's directory?

        <VirtualHost *:80>
            ServerName *.example.com
            ServerAdmin [email protected]
            VirtualDocumentRoot /home/%2/projects/%1
            <Directory /home/*/projects/>
                Options FollowSymlinks Indexes
                IndexOptions FancyIndexing FoldersFirst
                AllowOverride All
                Order Allow,Deny
                Allow From All
                Satisfy Any
            </Directory>
            Alias /favicon.ico /var/www/default/favicon.ico
            Alias /robots.txt /var/www/default/robots.txt
            LogLevel warn
            # ErrorLog /home/%2/logs/%1.error.log
            # CustomLog /home/%2/logs/%1.access.log combined
        </VirtualHost>
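
    Apache itself won't expand %1/%2 in the log directives, but CustomLog can pipe to an external program. One hedged approach, a sketch only: use a LogFormat whose first field is the requested hostname (e.g. LogFormat "%V %h %l %u %t \"%r\" %>s %b" per_user), point CustomLog at a small splitter script, and let the script append each record to the right user's directory. The script path and the project.user.example.com naming are assumptions inferred from the VirtualDocumentRoot above.

        #!/usr/bin/env python3
        # Sketch of a piped log splitter, referenced from Apache as, e.g.:
        #   CustomLog "|/usr/local/bin/split-vhost-logs.py" per_user
        # Assumes each record starts with the requested hostname (%V) and that
        # hosts look like <project>.<user>.example.com.
        import sys
        from pathlib import Path

        for record in sys.stdin:
            host, _, _rest = record.partition(" ")
            parts = host.split(".")
            if len(parts) < 3:
                continue                       # not a per-user vhost; skip it
            project, user = parts[0], parts[1]
            log = Path("/home") / user / "logs" / f"{project}.access.log"
            log.parent.mkdir(parents=True, exist_ok=True)
            with log.open("a") as fh:
                fh.write(record)               # keep the full record per vhost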

    Read the article

  • Aligning Numbered Bullet Points in Word 2007

    - by Frustratedwithbullets
    Hello, I am putting together a very large business manual which incorporates numbered headings, steps to follow, diagrams, etc. When I use bullet points, they align perfectly as I work through the processes. However, when I include a diagram or something different from the "norm" of text, the alignment changes. I would like all the bullet points to be aligned throughout the whole document, regardless of where they appear. Is there a way to save the settings so that the bullets always appear in the same position? Currently I am having to reset the indents by dragging the tabs on the ruler. This will be a large document, so I don't want to manually adjust the numbered bullets every time. Help would be greatly appreciated. Thanks very much.

    Read the article

  • Shortcut with arguments in Debian

    - by Duncan
    I have a volume on a Debian server which contains a large number of images at full resolution, in various folders. What I'd like to have is a separate "browse proxy" folder containing lower-quality browse copies of these, to let users view them over lower-speed dial-in accounts. Ideally these would be created on the fly using ImageMagick, so there isn't a need to store the large number of browse copies permanently and worry about keeping them up to date, etc. The way I'd envisaged this happening is for the browse proxy folder to contain a duplicate file and folder structure, but with symlinks pointing to a script that transforms the image, with the file path as an argument. Except I know this isn't possible with symlinks, so I'm wondering if there's another way of doing this on Linux. On Windows, shortcuts can take arguments; how can I do the same on a Linux platform? (Or perhaps I'm going about this the wrong way?)
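
    Symlinks can't carry arguments, but the same effect can come from a small script, invoked by whatever serves the files, that maps a requested path onto a cached, downscaled copy produced by ImageMagick. A rough sketch, with the directory names and the 800x800 / quality-60 settings as assumptions:

        # Sketch of an on-the-fly browse-copy generator, not a finished service.
        import subprocess
        from pathlib import Path

        FULL_RES = Path("/data/images")        # hypothetical full-resolution volume
        BROWSE = Path("/data/browse-cache")    # hypothetical cache for browse copies

        def browse_copy(rel_path):
            src = FULL_RES / rel_path
            dst = BROWSE / rel_path
            if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
                dst.parent.mkdir(parents=True, exist_ok=True)
                subprocess.check_call([
                    "convert", str(src),
                    "-resize", "800x800>",     # only shrink, keep aspect ratio
                    "-quality", "60",
                    str(dst),
                ])
            return dst

        # Hypothetical path, just to show the call shape:
        print(browse_copy("projects/site-a/photo-0001.jpg"))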

    Read the article

  • GitHub updating repository?

    - by user1804933
    I am trying to set up a GitHub repository from my server and have gotten to the point where I am running the command "git push -u origin master". However, a large file was detected and the following error was returned:

        remote: error: GH001: Large files detected.
        remote: error: Trace: 5520a70fd2eeaa2eafd7de049a590fb5
        remote: error: See http://git.io/iEPt8g for more information.
        remote: error: File app/logs/dev.log is 2041.59 MB; this exceeds GitHub's file size limit of 100 MB

    I ended up deleting that file and trying the push again, but I keep running into the same error. Any ideas on how to work around this?

    Read the article

  • How to tell Mercurial to never create hard links

    - by scrapdog
    I am planning to use Mercurial in the near future on some projects. These projects will normally reside in a directory on my Windows machine, but I will be sharing these directories using VirtualBox so I can work on them directly from within Linux. I understand that Mercurial will sometimes create hard links when cloning repositories. I'm not sure how a VirtualBox shared directory handles these hard links (or if it even can), so I'd rather just tell Mercurial to never attempt to make hard links and always make a copy. My question: how do I globally disable Mercurial from hard linking? (Although if someone has gotten Mercurial and VirtualBox shared folders to work nicely with hard linking, I'd like to hear about it!)

    Read the article

  • SAN for Medium Business - Where to start? [closed]

    - by Henson
    I've always run Linux on my home computers and have done PC repair for years, but this is my first experience with needing to buy a SAN. I thought I was knowledgeable, but I feel a bit lost. I need to be able to support 25 VMs, which are currently managed through vSphere. The company I'm at is growing quickly, though, so I'd like to plan for the future. Ideally, I want a solution that I can just tack arrays onto and manage as one large iSCSI volume. Suggestions? Good resources? If I can find something that appears to the software as one large drive, am I better off going with a solution like FreeNAS or StarWind, or an all-in-one proprietary solution like NetApp? Cost is, of course and as always, an issue.

    Read the article

  • Memory Speeds: 1x4GB or 2x2GB? [closed]

    - by Dasutin
    When it comes to speed, which is faster: having one 4 GB module in your system, or having two 2 GB modules? I'm leaving aside the fact that the system might have dual-channel capability. And what about a server environment? Would it be better to have one large, high-density module, or to break it up into several modules, for speed and price? I heard an engineer at my office having a discussion with an employee. He said that it's better in all situations to have one large-capacity module instead of breaking it up: it would be cheaper and perform faster, and he also said it would take the computer longer to access what it needed if there were more modules instead of just one. His explanation didn't seem right to me, so I thought I would post this question here to see what other people think.

    Read the article

  • C# development with Mono and MonoDevelop

    - by developerit
    For the past two years, I have been developing .NET from my MacBook by running Windows XP in VMware and, more recently, in VirtualBox on OS X. This way, I could install Visual Studio and work seamlessly. But this way of working has a major downside: it kills my laptop's battery. I can easily last for 3 hours if I stay in OS X, but only 45 minutes when XP is running. Recently, I gave MonoDevelop a try for developing Developer IT's tools and web site. While far less complete than Visual Studio, it provides the essential tools when it comes to developing software. It works well with solution and project files created from Visual Studio, it has IntelliSense (word completion), it can compile your code, and it can even target your .NET app to Linux or Unix. This tool can save me a lot of time and battery! Although I couldn't work with MonoDevelop alone, I find it far better than a simple text editor like Smultron. Thanks to Novell, we can now bring Microsoft technology to OS X.

    Read the article

  • Tracking download of non-html (like pdf) downloads with jQuery and Google Analytics

    - by developerit
    Hi folks, it's been quite calm at Developer IT this summer since we were all involved in other projects, but we are slowly coming back. In this post, we will present a simple way of tracking file downloads in Google Analytics with the help of jQuery. We work for a client that offers a lot of PDF files for download on their web site and wanted to know which ones are the most popular. They have been using Google Analytics for a long time now, and we did not want a second interface in order to present those stats to our client, so using IIS logs was not an option to consider. Since Google already offers us a splendid web interface and a powerful API, we decided to hook simple JavaScript code into the jQuery click event to notify Analytics that a PDF has been requested.

        (function ($) {
            function trackLink(e) {
                var url = $(this).attr('href');
                //alert(url); // for debug purpose

                // old page tracker code
                pageTracker._trackPageview(url);
                // you can use the new one too
                _gaq.push(["_trackPageview", url]);

                // always return true, in order for the browser to continue its job
                return true;
            }

            // When DOM ready
            $(function () {
                // hook up the click event
                $('.pdf-links a').click(trackLink);
            });
        })(jQuery);

    You can be more precise, or make sure not to miss a single click, by changing the selector that hooks up the click event. I have been using this code to track AJAX requests and it works flawlessly.

    Read the article

  • [ASP.NET 4.0] Persisting Row Selection in Data Controls

    - by HosamKamel
    Data control selection in ASP.NET 2.0: row selection in the ASP.NET data controls was based on the row index within the current page, which of course produces an issue: if you select an item on the first page and then navigate to the second page without selecting any record, you will find the row with the same index selected on the second page! In the sample application attached: select the second row in the books GridView, navigate to the second page without making any selection, and you will find the second row on the second page selected. Persisted row selection is a new feature that replaces the old index-based selection mechanism with one based on the row's data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2; when you move back to page 1, the third row is still selected. In ASP.NET 3.5 SP1, persisted row selection was initially supported only in Dynamic Data projects. In ASP.NET 4.0, persisted selection is supported for the GridView and ListView controls in all projects. You can enable this feature by setting the EnablePersistedSelection property on the control. An important thing to note: once you enable this feature, you also have to set the DataKeyNames property, because, as discussed, the whole approach is based on the row data key. A simple feature, but a much more natural behavior than in earlier versions of ASP.NET. Download Demo Project

    Read the article

  • SDL2 sprite batching and texture atlases

    - by jms
    I have been programming a 2D game in C++, using the SDL2 graphics API for rendering. My game concept currently features effects that could result in tens of thousands of sprites being drawn to the screen simultaneously. I'd like to know what can be done to increase rendering efficiency if the need arises, preferably using the SDL2 API only. I previously took a quick look at OpenGL-based 2D rendering, and noticed that SDL2 lacks a call like int SDL_RenderCopyMulti(SDL_Renderer* renderer, SDL_Texture* texture, const SDL_Rect* srcrects, SDL_Rect* dstrects, int count), which would let SDL benefit from two common techniques for efficient 2D graphics. Texture batching: sorting sprites by the texture used, then rendering as many sprites that share the same texture as possible in one go, changing only the source area on the texture and the destination area on the render target between sprites; this encapsulates the whole operation in a single GPU command, drastically reducing the overhead of multiple distinct calls. Texture atlases: instead of creating one texture for each frame of each animation of each sprite, combining multiple animations and even multiple sprites into a single large texture; this lessens the impact of changing the current texture when switching between sprites, as the correct texture is often already in use from the previous draw call, and the GPU is optimized for handling large textures, in contrast to the many tiny textures typically used for sprites. My question: would SDL2 still get somewhat faster from rudimentary sprite sorting, or from combining multiple images into one texture, thanks to automatic video driver optimizations? If I encounter performance issues related to 2D rendering in the future, will I be forced to switch to OpenGL for lower-level control over the GPU? Edit: are there any plans to include such functionality in the near future?
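
    Not SDL2 code (SDL2 has no such call, as the question notes), just a language-agnostic sketch of the batching idea: queue the frame's sprites, group them by texture, then submit each group together so the texture is switched once per group rather than once per sprite. The renderer.bind and renderer.copy names are placeholders, not a real API.

        # Illustration of texture batching only, with placeholder renderer methods.
        from collections import defaultdict

        def flush(queued_sprites, renderer):
            by_texture = defaultdict(list)
            for texture, src_rect, dst_rect in queued_sprites:
                by_texture[texture].append((src_rect, dst_rect))
            for texture, rects in by_texture.items():
                renderer.bind(texture)            # one texture switch per group
                for src_rect, dst_rect in rects:
                    renderer.copy(src_rect, dst_rect)  # same texture for the whole group
            queued_sprites.clear()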

    Read the article

  • Need a solution to store images (1 billion, i.e. 1,000,000,000) which users will upload to a website via PHP or JavaScript upload [on hold]

    - by wish_you_all_peace
    I need a solution to store one billion images which users will upload to a website via PHP or JavaScript (the website will have a billion page views a month, running on Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px and 10 larger, script-resized images with a maximum width or height of 500px, depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume a LEMP stack (Linux, Nginx, MySQL, PHP) social-media or social-matchmaking type of application whose content will be text and images. Since everyone knows that storing tons of user-uploaded images in a single directory, or on plain NFS, is a bad idea, please explain the architecture and configuration of the storage setup you would recommend for one billion images (no third-party cloud storage like S3; it has to live in our private data center on our own hardware and resources), including how to organize the images users upload. Given that a single user will have no more than 20 images (10 thumbnails and 10 large, with width or height up to 500px), how should we organize them? Please consider that this has to be structured so we can fetch a single user's images programmatically, via PHP/JavaScript or an API, through some kind of unique user identifier.
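
    Whatever the underlying storage (a distributed filesystem, multiple storage heads, an object store), the per-user organization usually comes down to a deterministic sharded path derived from the user ID, so no directory grows too large and any front end can compute an image's location without a lookup. A sketch of that idea only; the base path, shard depth, and naming are assumptions, not a hardware recommendation:

        # Sketch: map (user_id, photo number, size) to a stable sharded path.
        import hashlib
        from pathlib import Path

        BASE = Path("/srv/images")   # hypothetical storage mount

        def image_path(user_id, photo_no, size):
            # size is "thumb" or "large"; photo_no is 0-19 since each user has <= 20 images
            h = hashlib.md5(str(user_id).encode()).hexdigest()
            return BASE / h[:2] / h[2:4] / str(user_id) / f"{photo_no:02d}_{size}.jpg"

        # e.g. /srv/images/<xx>/<yy>/123456789/03_thumb.jpg, where xx/yy come from the hash
        print(image_path(123456789, 3, "thumb"))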

    Read the article

  • Releasing Shrinkr – An ASP.NET MVC Url Shrinking Service

    - by kazimanzurrashid
    A few months back, I started blogging about developing a URL-shrinking service in ASP.NET MVC, but could not complete it due to my engagement with professional projects. Recently, I was able to find some time for this project to complete the remaining features that we planned for the initial release. So I am announcing the official release; the source code is hosted on CodePlex, and you can also see it live in action over here. The features we have implemented so far: Public: OpenID Login. Base 36 and 62 based URL generation. 301 and 302 Redirect. Custom Alias. Maintaining a user's generated URLs. URL Thumbnail. Spam Detection through Google Safe Browsing. Preview Page (with Google warning). REST-based API for URL shrinking (json/xml/text). Control Panel: Application Health monitoring. Marking URLs as Spam/Safe. Block/Unblock User. Allow/Disallow User API Access. Manage Banned Domains. Manage Banned IP Addresses. Manage Reserved Aliases. Manage Bad Words. Twitter Notification when spam is submitted. Behind the scenes it is developed with: Entity Framework 4 (Code Only), ASP.NET MVC 2, AspNetMvcExtensibility, Telerik Extensions for ASP.NET MVC (yes, you can use it freely in your open source projects), DotNetOpenAuth, Elmah, Moq, xUnit.net, and jQuery. We will also be releasing a minor update in a few weeks which will contain some of the popular Twitter client plug-ins and samples of how to use the REST API; we will also try to include the NHibernate + Spark version in that release. In the next release (not sure about the timeline) we will include geocoding and some rich reporting for both users and administrators. Enjoy!

    Read the article
