Search Results

Search found 8219 results on 329 pages for 'less'.


  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. Instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a retina MacBook Pro, and my computer generally has between 10 GB and 20 GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force-quit my GUI programs).

    Is there a way to make Python just die when it runs out of real memory, or some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Or maybe there's a way to tell the Mac OS X kernel to never use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
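
    A minimal sketch, assuming a Unix-like platform: the resource module can cap the process's address space so oversized allocations raise MemoryError instead of growing into swap. (Whether OS X actually enforces RLIMIT_AS is worth verifying; the 4 GB limit below is purely illustrative.)

      import resource

      # Cap total virtual address space at 4 GB (illustrative value).
      limit_bytes = 4 * 1024 ** 3
      resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))

      try:
          data = bytearray(8 * 1024 ** 3)  # deliberately exceeds the cap
      except MemoryError:
          print("allocation refused once the address-space cap was hit")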

  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server that is 2008 R2 Standard, a member server in a 2008 AD domain. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason is that some of the files, due to recent software changes, HAVE to be located locally on one of the workstations, but they can still be accessed by other applications remotely.) So symbolic links seem the panacea here. I moved a directory to another network share in the same domain (on a Windows 7 Professional machine), created a symlink to it in the location it used to be in, named it the same thing, and to the local user it seems almost transparent: when logged into the desktop of the file server, I can go to the directory, open the link, and it leaps to the other share as if it were local, exactly what would be expected.

    Then I tried it from another client computer (also Windows 7 Professional), went through the normal provisioning of R2R and L2R with fsutil... no joy. What I get is an access-denied error, "Logon failure: unknown user name or bad password," using the same account that I log on to the file server with locally (which happens to be the domain admin), so I cannot believe it is telling the truth. I assume the credentials I use to connect to the first share are not being passed all the way through the symlink.

    The end result I want is for users on the domain to browse to share A, where share A contains a mixture of directories/files that reside there and symlinks to directories/files on the second machine over the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?
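
    For reference, a minimal sketch of the commands involved (the share, server, and path names are hypothetical):

      :: On each client, enable local-to-remote and remote-to-remote
      :: symlink evaluation
      fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2R:1 R2L:1

      :: On the file server, recreate the moved directory as a symlink
      mklink /D "D:\Shares\Data\Reports" "\\WORKSTATION1\Share\Reports"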

  • Mouse click-focus wanders in vmPlayer 3.0 dual-monitor

    - by Gary M. Mugford
    Previously, a WinXP SP3 session running on a WinXP SP3 host ran perfectly fine in a dual-monitor setup, with no issues under vmPlayer 2.x. BEFORE updating to vmPlayer 3, the following problem cropped up. When clicking in a single monitor, you would get exactly what you expected. However, when the display was stretched across two monitors, the click would register to the left of the mouse cursor, and the farther RIGHT you were, the farther left the click would occur. In other words, if you clicked on the system menu of a window in the upper left of the left monitor, you would get the system menu. Move half a screen to your right and the click would land on an item about a quarter of the way over, rather than where you were clicking. And by going all the way to the far right of the right monitor, you could bring up a right-click menu on the far right of the LEFT monitor. I hope I have described this properly; it's confusing, even in words.

    In single-monitor mode, everything works perfectly fine. If, instead of using either UltraMon or DisplayFusion, you run a single desktop across both monitors (3200x1600), there are no mousing issues. Unfortunately, with two 1600x1200 monitors, that depth of 1600 makes that hack less than usable, and my graphics card won't offer anything resembling 3200x1200. vmPlayer 3.0 did not alleviate the situation. The Microsoft mouse drivers are up to date and so are the nVidia card drivers. Any ideas?

  • IDE/PATA high-speed hard drive dock

    - by wfaulk
    I frequently need to access bare drives for backups and need a quick, high-speed way to deal with them. There are a multitude of SATA hard drive docks (for example), but I have a lot of IDE/PATA (hereafter "IDE") drives that I would like to be able to use similarly. There are IDE-to-SATA adapters that let you plug an IDE hard drive into a SATA port, so I don't see any reason why you couldn't use the same technology in a native dock, yet none seems to exist. Now, I'm aware that 3.5" IDE drives do not have a specification for the layout of the connector, and therefore can't be slapped into a dock the same way a SATA drive can, but 2.5" PATA drives do. In fact, I'm not terribly interested in supporting 3.5" drives; it would be nice, but I deal with them far less frequently than 2.5" drives. Also, I'd very much like the connection to the computer to be faster than USB, preferably eSATA; I don't want to spend time mounting a drive inside an enclosure; I don't want bare drives lying around with a cable hanging off of them; and I'd prefer a single dock rather than two. What seems like the ideal solution to me would be a regular SATA→eSATA dock and some sort of screwless adapter for IDE drives, but I'm open to any suggestions, regardless of my stated preferences, which are, in rough order of preference:

    - high-speed (faster than USB, at least)
    - holder for the drive (not just a cable)
    - no complicated enclosure
    - support for 3.5" IDE drives
    - single dock

    Updates: Here's a 3.5" IDE to 3.5" SATA docking adapter that could be part of the solution. Weird, I figured that would be the impossible part. I was hoping to find something like this 2.5" to 3.5" SATA chassis that would take a 44-pin IDE drive internally. It looks like the Vantec EZ Swap EX comes awfully close. It has its own bay dock, but it looks like the SATA ports on the back are spaced properly, even if they're not aligned quite right. Unfortunately, the proper position is at the very edge of the drive, which means that the dock's connectors are at the very edge of their recesses, which means there's no way to fit it in there.

  • Rename a Network Printer in Windows 7

    - by Alex
    Seems this is a common question, but the answers regularly miss the point. I have a server. The server has some printers connected. The server has all drivers for x32 & x64 OSes PLUS ALL DEFAULTS set, and it also manages the print queue. I have many workstations, all of which need to use the printers, and all of which NEED to have the drivers, print queue, and defaults propagated from the server. Now... when I add the printers on the workstations, I get: "ABC Printer on SERVER123". I need something less long - just "ABC Printer". So how?

    Tip: Please don't show me how to change the name of a locally installed printer. I know how to do that - I am specifically interested in shared printers that appear as "ABC Printer on SERVER123".

    Tip: Installing the driver with a local port won't cut it, because then I lose the server-propagated defaults and driver updates, and I need to run around with driver disks and confuse trembling users with hard things like choosing drivers.

    I am happy with a hack if there is no official way to do this in group policy... I tried looking in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Printers on the workstation machines, but those are only local printers :( I can see the network printer details on the workstations here: HKEY_USERS\[Some GUID]\Printers\Connections - but there is nothing obvious like a description string... Can anyone help with this?

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the WebDAV parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster waiting to happen at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
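
    One commonly suggested middle ground, sketched below assuming a Debian-style layout (the group and path names are hypothetical): give Apache a umask of 002 instead of 000, put the owner and Apache in a shared group, and mark the share setgid so new files inherit that group.

      # /etc/apache2/envvars
      umask 002

      # shell, run as root; "webdav" is a hypothetical group name
      groupadd webdav
      usermod -aG webdav foo
      usermod -aG webdav www-data
      chgrp -R webdav /home/foo/www
      chmod 2770 /home/foo/www   # setgid bit: new files inherit the group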

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo, mounted as an AFP volume. The files are about 5 MB each, and I copy them by selecting all the files that I need in a Finder window and then dropping them onto the destination directory. Almost always, some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place, the Finder window for the destination directory is updated to show 200 files while it was empty before), but after the copying ends the destination directory shows fewer than 200 files (let's say 190). If I copy the same 200 files to the NAS again, without replacing already-copied files, the remaining 10 files are usually copied correctly; in a few cases, I have to repeat the process a third time. Note that the Finder gives no warning at any stage that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder, and/or if there is something I can do to solve it.
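
    As a stopgap for verifying and completing the copy from the command line (a sketch; the paths are hypothetical), rsync only transfers what is missing, so it can simply be re-run until source and destination match:

      # -r recurse, -t preserve times, -v list what gets copied;
      # add --dry-run first to preview
      rsync -rtv ~/batch/ /Volumes/media/batch/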

  • UPS with an HP ProLiant server

    - by Groo
    We placed an Eaton Ellipse MAX 1500 (900 W) as the UPS for our HP ProLiant ML350 G6. Upon the first power failure (actually, we only moved the UPS's input plug to a different socket), the server immediately turned off, and the Health LED turned red and started blinking. The UPS had been in operation for about a week before that, with the battery fully charged to 100%. Since our server's hot-plug power supply is 460 W, we are pretty sure we haven't overloaded it; the server was completely idle at the time (no web or Windows apps running except Windows Server core services). Then we tried the same thing with a different, no-name older PC (Core 2 Duo, 2 GB RAM) with a generic power supply (not sure what its rating is), and it continued working when we pulled the plug. UPS load was less than 15% (measured in the provided Eaton utility). We measured the UPS's output voltage using a smart oscilloscope, and the THD of the output waveform turned out to be 40%. Have you had similar experiences? Could this be a faulty UPS? Or a faulty power supply? Or some HP sensor configured to trigger too strictly? I wouldn't like to replace this UPS with the same brand only to get the same results.

    [Edit] I also tried this while the server was turned off. While the UPS is running on battery, the server will not start - as soon as I press the power button, the Health LED starts blinking red.

  • Amazon EC2 instance was not available for a few minutes (Amazon showed that everything was OK)

    - by Salvador Dali
    A few minutes ago my Amazon EC2 instance was unavailable for a few minutes. During this time I could neither connect to the web site over HTTP nor SSH to the instance. I was also unable to connect to the AWS management console for some of that time (less time than my instance was unavailable). When I could connect to the management console, it showed me that everything was running smoothly, but I still could not reach the instance in any way for another minute or two. During this time I checked the AWS status page, which showed no issues (my instance is in Ireland and there is nothing wrong there today). After that I was able to log in. I checked the logins with last and saw that no one except me had logged in. I also looked in the Apache logs, and there were no errors or warnings during this time. Right now the Amazon monitor shows a small spike in CPU in the last 15 minutes (but this is from 10% to maybe 20%). I have no idea what this could be (I have never experienced anything like it before), and therefore I have no idea how scared I should be or what else to look for. Can anyone give me a hint as to what my actions should be in such a situation?

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slowdowns, and I'm not surprised (it's around 6 years old). Here's what I've verified:

    - They are not very frequent (only a couple of times a day).
    - When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow.
    - Even as it is happening, the CPU usage stays low.
    - It happens to applications such as a text editor, Firefox, and Skype.
    - It never happens to some applications (such as games) which I use for hours under heavy CPU load.

    Also of note: the graphics card and PSU are new (around a year old). Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows. This HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200 GB of data). Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other, less likely possibilities (such as RAM, software, or the PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if things get better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since the new hard drive won't have any of the software I use daily). Running on Windows/Linux.
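
    One place to start (a sketch; the device name is an assumption) is smartmontools, which reads the drive's SMART health data and can run the drive's built-in self-tests from the Linux side:

      # health summary and attribute table; watch Reallocated_Sector_Ct,
      # Current_Pending_Sector and UDMA_CRC_Error_Count in particular
      smartctl -a /dev/sda

      # kick off the drive's long self-test, then re-run -a when it's done
      smartctl -t long /dev/sda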

  • Linux machine can't find its tape drive

    - by Kyle Hodgson
    I have an older HP NetServer LPr with what is apparently a Symbios SCSI card connecting to a Quantum SuperLoader 3 that is DLT-based. From time to time, we seem to lose the connection to the autoloader. It's usually due to flaky power, but I'm not totally sure why; sometimes when this happens the autoloader's LEDs are orange and it needs to be power cycled. The annoying workaround currently is to reboot the machine. As it is our production VPN and DNS server in addition to being our backup server, this is less than optimal. In Debian (Sarge), is there not some command one can type to get the card to notice that it has the autoloader connected again?

      dcr1:/proc# grep -i symbios /proc/pci
        SCSI storage controller: LSI Logic / Symbios Logic 53c895 (rev 1).
      dcr1:/proc# uname -a
      Linux dcr1 2.4.27-3-686 #1 Tue Dec 5 21:03:54 UTC 2006 i686 GNU/Linux
      dcr1:/proc# mt status
      mt: /dev/tape: No such device
      dcr1:/proc# ls -l /dev/tape
      lrwxrwxrwx 1 root root 8 2007-02-07 16:01 /dev/tape -> /dev/st0

    That mt status command will show the actual st0 status when things are working correctly. The "No such device" message is usually the second clue that we need to reboot - the first clue is usually that the backups didn't run.
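
    On a 2.4 kernel, the usual trick is to remove and re-add the device through /proc/scsi/scsi (a sketch; the host/channel/id/lun values here are hypothetical and must match what the kernel reports for the autoloader):

      # see what the kernel currently knows about
      cat /proc/scsi/scsi

      # drop the stale entry, then re-probe it (host channel id lun)
      echo "scsi remove-single-device 0 0 5 0" > /proc/scsi/scsi
      echo "scsi add-single-device 0 0 5 0" > /proc/scsi/scsi

      # if /dev/st0 still doesn't come back, reload the tape driver
      rmmod st && modprobe st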

  • Safe mode boot with no change on screen but ongoing hard disk activity - why?

    - by omatai
    I have a machine with a dying hard drive - bad sectors are starting to multiply :-( The first sign (24 hours ago) was an unmountable boot volume. At that point I booted to safe mode with command prompt, which worked, after which I rebooted normally and ran a chkdsk. It has since been working as well as I could expect, but it is slowly getting less reliable. So I scheduled another chkdsk on both partitions (C: - boot, D: - data), having freed up a lot of space on both partitions to give Windows a little more scope for repairs (hopefully?), and then rebooted.

    On reboot, it protested about the unmountable boot volume again, so I booted to safe mode. I got the same list of drivers loaded as yesterday, and then no change on the screen for the past 2 hours. However, I see a flickering hard drive indicator light - not always on, but seldom ever off. What is happening? Is the chkdsk that runs in safe mode one that produces nothing on the screen, so chkdsk could be doing its thing... or is Windows still trying (but failing) to boot into safe mode?

  • Remote desktop solution where the desktop sharing party contacts the computer it wants to share with

    - by Kent
    I'm in a situation where I act as a sort of technical support to my family and less technically experienced friends. I'm looking for a remote desktop solution where it's possible to set up a "zero-install, double click an icon" arrangement in which the client computer contacts me so that I may interact with their desktop. The last part is important, as the people in need of my help don't know how to configure their router or even the firewall software on their own computer. They are able to click an accept button when asked if a program should be able to make outgoing connections. They have many different kinds of routers, as well as software firewalls, and I'd rather not deal with the problem of how to connect to them on top of the actual problem they are having. It must be:

    - Free of charge for non-commercial use.
    - Usable in a mode where the computer wanting to share its desktop makes a connection to my computer. My computer has a DNS name we can use.
    - Compatible with both Windows XP and Windows 7.
    - Independent of a third-party server or infrastructure.

    Explanations of the above: I don't want to spend money on it when I help them for free (if it's free as in freedom, all the better!). I guess the connection requirement boils down to being callable like "showdesktopto.exe opscomputer.com", where opscomputer.com is my computer's DNS name; if that is possible, I can create a shortcut they can use to connect to me when they need help. It's nice if it's possible to specify a password or key file which I can use to authenticate myself, but it's not required. They use the OS their machine came installed with, which means Windows XP or 7. Finally, I want something that will work in the long run; using a third-party service which might not be available when I need it disqualifies such solutions.
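
    UltraVNC's reverse-connection mode is one fit for this flow (a sketch; the host name is hypothetical, and UltraVNC's "SingleClick" packaging wraps the client side into a single double-clickable EXE):

      :: On the helper's machine: wait for incoming sessions
      vncviewer.exe -listen

      :: On the family member's machine: connect outward to the helper
      winvnc.exe -connect helper.example.com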

  • New Windows Server 2008 R2 WIMP running slower than Windows Server 2003

    - by starshine531
    We recently upgraded a WIMP server from Windows Server 2003 (32-bit) to Windows Server 2008 R2 (64-bit). The new server has significantly better hardware than the old server, yet many processes take much longer than on the old box. We have a rather complex web application process that normally takes about 7 seconds on the old box, but on the new one it takes 11-12 seconds - and that's down from the 15.5 seconds it took before I disabled IPv6. This process involves some queries (some of them in transactions with maybe 3 queries between the start and commit) and creating and emailing some PDFs. Windows updates are current on a more or less fresh machine. This happens consistently, even when we have almost no traffic on the site and memory and CPU aren't being pressed hard at all. The only differences between the servers other than the OS and hardware:

    1) When available, we used 64-bit versions of programs.
    2) The new server uses MySQL 5.5 rather than MySQL 5.1 (I did run the mysql_upgrade program, and we use InnoDB for the engine).
    3) The new server uses PHP 5.3.18 rather than PHP 5.3.1.
    4) With the new OS came IIS7 rather than IIS6, of course.

    What could cause better hardware to run so much slower? Let me know if you need more details. Thank you.

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) - actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET, and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have a request scope (not a lot). The IIS servers are virtual machines with 4 CPUs and 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring that percentage down (maybe even to the point of using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the 60-70% load is achieved with assets around 30 KB). The data that gets saved with WebORB is very small (2 KB) and is just one object.

  • ulogd 2.x - documentation - IPFIX data generation

    - by Gomathivinayagam
    I would like to generate IPFIX data from the packets that are coming to my local system, as part of an experiment. ulogd seems to be a good tool for this, and I am able to capture PCAP data. But there is very little documentation available on IPFIX generation with ulogd 2.x (only a few examples are provided in ulogd.conf). Can you provide any links that describe how to generate IPFIX data using ulogd 2.x?

    1) What are the options available? I saw there is a polling-interval configuration, but I have no idea how it works.
    2) If I set hash_enable = 0 and uncomment the polling_interval value, I get an exception saying the NFCT plugin requires a hash table, even though I have specified hash_buckets and hash_max_entries. Could you help with this?
    3) In general, I would like to know how the NFCT plugin works in ulogd 2.x. I sent mail to the ulogd mailing list, but there were no replies. Could you shed some light?
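
    For orientation, here is a hypothetical ulogd.conf fragment along the lines of the samples shipped with ulogd 2.x - stanza and option names vary between versions, so treat every name below as an assumption to check against your own ulogd.conf:

      # stack: conntrack input feeding an IPFIX output
      stack=ct1:NFCT,ipfix1:IPFIX

      [ct1]
      pollinterval=30    # seconds between polls of the conntrack table
      hash_enable=1      # NFCT wants its hash table when tracking state

      [ipfix1]
      oid=1              # observation domain ID
      host=127.0.0.1     # IPFIX collector address
      port=4739          # standard IPFIX port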

  • Error creating a Windows 8.1 system image

    - by Random
    I'm getting a "not enough space" error when trying to create a system image on a USB hard drive. The detailed error:

      ERROR - A Volume Shadow Copy Service operation error has occurred:
      (0x8004231f) Insufficient storage available to create either the
      shadow copy storage file or other shadow copy data.
      Blah, blah...
      There is not enough disk space to create the volume shadow copy on
      the storage location. Make sure that, for all volumes to be backed
      up, the minimum required disk space for shadow copy creation is
      available. This applies to both the backup storage destination and
      volumes included in the backup.
      Minimum requirement: for volumes less than 500 megabytes, the
      minimum is 50 megabytes of free space. For volumes more than 500
      megabytes, the minimum is 320 megabytes of free space.
      Recommended: at least 1 gigabyte of free disk space on each volume
      if volume size is more than 1 gigabyte.

    I tried both PowerShell:

      wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    and the File History button in Control Panel. Clearly both the EFI and Windows Recovery Environment partitions fall short of the requirements quoted by the System Image tool. On top of that, all system partitions are now shown as 100% free in Disk Management, which is disturbing but far from the actual state. My question is: how do I create a system image in Windows 8.1?
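
    One thing worth checking from an elevated prompt (a sketch; the sizes are illustrative) is the shadow-copy storage allocation on each volume, since the error comes from VSS rather than from the backup target:

      :: show current shadow storage associations and limits
      vssadmin list shadowstorage

      :: example: raise the cap for C: if it turns out to be too small
      vssadmin resize shadowstorage /for=C: /on=C: /maxsize=2GB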

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal and best compression levels; in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

    1) Is there a way to speed 7-Zip up? I'd like it to be at least on par with WinRAR's speed.
    2) Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command-line thing.
    3) I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare-minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz)/(2.8 GHz) with 4 GB DDR3 RAM and the 64-bit version of 7-Zip, and I dual-boot Debian x64 5.0.4 and Windows 7 Home.
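
    From the command line, 7-Zip's speed/ratio trade-off is directly tunable (a sketch; the archive and source names are hypothetical):

      :: -mx=1 is fastest, -mx=9 is best compression; -mmt=on uses all cores
      7z a -mx=3 -mmt=on backup.7z C:\data\

      :: producing a .zip with 7-Zip is another fast, compatible option
      7z a -tzip -mx=5 backup.zip C:\data\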

  • SQL Server Backup File Significantly Smaller After Table Recreation

    - by userx
    We run automated weekly backups of our SQL Server. The database in question is configured for simple recovery, and we back up using full backups, not differentials. Recently, we had to re-create one of our tables with data in it (making two varchar fields a couple of characters longer). This required running a script which created a new table, copied the data over, and then dropped the old one. This worked correctly. Oddly, though, our weekly backup files then SHRANK by over 75%! The tables don't have large indexes, and all data was copied over correctly (and verified). I've verified that we are doing full and not incremental backups, and the new files restore just fine. I can't figure out why the backup files would have shrunk so much. I've also noticed that they grow about 10 MB larger every week, even though less than that amount of data is being added. I'm guessing that I'm simply not understanding something. Any insight would be appreciated.
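
    One plausible piece of the picture (hedged, since I can't see the database): a full backup copies only allocated extents, not the whole data file, so rebuilding a table that carried a lot of internal free space can legitimately shrink the backup. Comparing reserved versus used space makes this visible (the table name below is hypothetical):

      -- reserved vs. data vs. index vs. unused space for one table
      EXEC sp_spaceused N'dbo.MyTable';

      -- the same figures for the database as a whole
      EXEC sp_spaceused;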

  • Load Testing a Security/Gateway Appliance

    - by Joel Coel
    In a couple of weeks I will be load testing a security/gateway appliance. We're a small residential college, and that "residential" means the traffic moving through the appliance is a bit like the Wild West. We have everything from Facebook to World of Warcraft, BitTorrent to Netflix, Halo to YouTube... basically anything you might find in the home of a high-school or college-aged person. Somewhere in there some real academic work gets done as well. We rely on our current appliance for traffic shaping, antivirus, malware filtering, intrusion detection on our servers, logging and abuse reporting, and even some content filtering. All this puts a decent load on it when we have students around, and I'm concerned about the ability of the new candidate to keep up. On paper it should handle things, but I'm worried: prior experience is that vendors greatly over-report what an appliance can handle. The product also includes a licensed session limit, and I'm worried that just a few misbehaving students could unwittingly bring us to that limit and cause service disruptions. I need to know this will work for our campus before I commit to it; going a performance level higher in that product line takes the pricing way out of line with what we expect and have paid in the past.

    What I need is a good way to load test this thing. My problem is that our current level of summer traffic is less than one percent of what it will be when students come back just six weeks from now. Any ideas on how to really stress this thing and see what it can do, in a way that will give me some clear idea of how it will scale for our campus? For the curious, I'm looking at a WatchGuard 515, but it could be anything.
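
    One approach (a sketch; the interface names and capture file are hypothetical) is to capture a sample of real campus traffic now and replay it through the appliance at a multiple of its natural rate with tcpreplay:

      # grab a sample of live traffic from a mirror/SPAN port
      tcpdump -i eth1 -w campus-sample.pcap

      # replay through the device under test at 10x speed, looping forever
      tcpreplay --intf1=eth0 --multiplier=10 --loop=0 campus-sample.pcap

    One caveat: replayed packets are stateless and one-sided, so features that track full sessions may behave differently under this kind of load than under real traffic.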

  • Windows SBS 2008 problem

    - by MadBoy
    I was at a client site today that has Windows SBS 2008 installed with Symantec Endpoint Protection. The problem is that after I logged in and typed commands like services.msc or msconfig into "Run", nothing started. For the first 5 minutes I can click around the Start Menu and open some applications (non-Microsoft ones work, and even Control Panel works). But then something happens and I can't click where I want: I can click on the Start Menu and get it active, but I can't choose anything from it; everything is as if blocked. I can right-click on the desktop and do many things, but most left clicks are blocked. Even when I start Task Manager, I can see it but cannot click it, activate it, or anything. It acts very, very weird. It's a newly installed system, less than a month old, and it hasn't really been used (it's been down most of the time). I suspect Symantec Endpoint Protection might be at fault, so when I go back there (Wednesday) I will uninstall it, but maybe someone else has ideas about what may be happening. I doubt there's any virus or anything; Symantec was installed right after setting everything up.

  • Managing a high-traffic media-sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, and consumers will visit the page and listen to the music. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with while putting together a business plan, but I am seeking investments. I have a game plan to grow fast enough to be successful and slow enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. The problem: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from an archive, but I don't currently have an external index of its files. So, is there an alternative for Linux that lets me build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-negotiable, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing the archives, and even so I think tar is too slow. To be precise about "slow", I'd like the following:

    - Listing the archive contents should take time linear in the file count inside the archive, but with a very small constant (e.g., if a list of all the files is included at the head of the archive, it could be very fast).
    - Extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g., if I'm extracting a 2 MB PDF file from a 40 GB archive, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g., a tree structure).
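
    As a stopgap with plain tar (a sketch; the names are hypothetical), an external index can at least make lookups instant, since tar's verbose listing can be captured at creation time:

      # create the archive and write the member list to an index file
      tar cvf backup.tar /data > backup.idx

      # later: search the index, then extract only the member you need
      grep 'reports/2012' backup.idx
      tar xf backup.tar data/reports/2012/summary.pdf

    This fixes slow listing but not slow extraction, since tar still scans to the member; tools built around an internal catalog (dar, for example) aim at exactly that seek problem.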

  • Tablet as Car Computer

    - by luxurychair
    Okay, so forward this off to the right place if this isn't the right place to ask. I want to use a tablet computer as a car computer. The minimum features would be: running my music (through iPod, Pandora, whatever I want), GPS navigation, watching TV or movies while I'm parked waiting for people, and the hard one: answering my phone calls with a pleasant interface, much like in-dash systems do. It needs to detect that my phone is ringing in my pocket, provide an on-screen answer/ignore prompt, and then route the audio through the car's speakers. It would be nice to dial out and have address book access, but that is somewhat secondary. I have an iPhone myself, and I figured that an iPad with 3G might make a good system for this - but I'm open to other options if an iPad can't do everything I need. I'm willing to write code, and I'm willing to jailbreak one or both devices. I haven't done much work in Obj-C, but I'm not opposed to learning a new language for this project. It's self-enrichment for the most part, as I'm sure I could buy an in-dash entertainment system for less. Does anyone have experience with the iPhone/iPad SDK who can tell me whether it would be possible to get an iPad to answer my calls in the car? What about an Android tablet? (That Adam looks promising, too.)

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command:

      localhost*CLI> timing test
      Attempting to test a timer with 50 ticks per second.
      Using the 'timerfd' timing module for this test.
      It has been 1000 milliseconds, and we got 50 timer ticks

    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp things up to what I think are dahdi_test's standards:

      localhost*CLI> timing test 1024
      Attempting to test a timer with 1024 ticks per second.
      Using the 'timerfd' timing module for this test.
      It has been 1000 milliseconds, and we got 1024 timer ticks

    This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it's useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?
