Search Results

Search found 4278 results on 172 pages for 'capacity planning'.

Page 11/172 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • How do I break down and plan a personal programming project?

    - by Pureferret
    I've just started a programming job where I'm applying my 'How to code' knowledge to what I'm being taught about 'How to Program' (they are different!). As part of this, I've been taught how to capture requirements from clients before starting a new project. But how do I do this for a nebulous personal project? I say nebulous because, halfway through programming something, I often find I want to expand what the program will do or alter the result. Eventually I'm tangled in code and have to restart, which is frustrating and off-putting. Conversely, when given a fixed task with fixed requirements, it's much easier to dig in and get it done. At work I might be told, "Today/this week you need to add XYZ to program 1." That is easy to do. At home (for fun) I want to make, say, a program that creates arbitrary lists. It's a very generic task. How do I start with that? I don't need it to do anything in particular, but I want it to do something. So how do I plan a personal programming project? Related: What to plan before starting development on a project?

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details! During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption. The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will very likely incur disk hits. A secondary processing workflow is a high-read pass over previous processing runs that may be days, weeks, or even months old; it runs infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.
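
    For a first-order estimate, here is a minimal sketch built on the same 1:10 rule of thumb quoted above, plus assumed numbers for the hot fraction of the data - none of this replaces measuring the actual working set, and the SSD ratios are placeholders rather than answers:

      dataset_tb = 1.0            # "most of a TB within half a year" (from the question)
      dataset_gb = dataset_tb * 1024

      # Rule-of-thumb ratios: 1:10 RAM:disk is the figure often quoted for
      # spinning disks; whether SSDs justify something looser is exactly the
      # open question, so these are assumptions to play with.
      ratios = {"spinning 1:10": 10, "ssd 1:20 (assumed)": 20, "ssd 1:30 (assumed)": 30}

      # Alternative, workload-based estimate: enough RAM for the dedup
      # working set plus indexes, letting everything else pay the SSD penalty.
      working_set_fraction = 0.3     # assumed hot fraction during stage two
      index_overhead_fraction = 0.1  # assumed index size relative to data

      for name, ratio in ratios.items():
          print("%-22s -> %5.0f GB RAM" % (name, dataset_gb / ratio))

      workload_gb = dataset_gb * (working_set_fraction + index_overhead_fraction)
      print("working-set estimate   -> %5.0f GB RAM" % workload_gb)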

    Read the article

  • Path of Replication

    - by geeko
    I'm currently developing a replication system to keep data in sync between an arbitrary number of servers. Some of these servers exist in one cluster on one LAN; others exist somewhere else in the world. I'm wondering what the pros and cons are of the different paths we could choose to route replicated data between servers. In other words, what are the different strategies for load balancing the replication process?

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team has been asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put a certain number of user stories on the wall regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5. Coming to the problem: the team then moved on to the user stories regarding the other libraries, 10 stories in all, and added those 2 points for the "function call mechanism" to each of them. This immediately raised the total points for the product by 20! Everyone in the team knows that any user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel awfully unrealistic. I've proposed a solution, but I'm absolutely not satisfied with it: we created a "design story" and put those annoying 2 points on it. However, when we came to implement it and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The problem here is whether we should ignore the principle of keeping user stories isolated (without dependencies between them). What would you do, or even better, what have you done in situations like this? (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • Release roadmap with scrum

    - by SyBer
    I need to prepare an internal product release roadmap for a product being built with the Scrum methodology, and I'm having some difficulty correlating sprints to the roadmap. The main problem is that I don't have effort estimates for every story, because these are prepared immediately before each sprint, so I don't know what will make it into which sprint. I'm fine with changing the roadmap as development goes on, but I need it to give at least some indication of when things are planned to be released. So what would be the best way to do this, other than guesstimating the whole backlog? Thanks for any ideas.
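
    One common approach, shown as a sketch rather than the only way: give the un-estimated backlog rough, coarse-grained sizes, assume a velocity range instead of a single number, and let the roadmap show a best-case and worst-case sprint for each release. The numbers below are purely illustrative:

      # Rough backlog sizes in story points (coarse guesses, refined before each sprint).
      backlog = [("Release 1 scope", 60), ("Release 2 scope", 45), ("Release 3 scope", 80)]

      # Observed or assumed velocity range per sprint.
      velocity_low, velocity_high = 15, 25

      done = 0
      for name, points in backlog:
          done += points
          worst = -(-done // velocity_low)    # ceiling division
          best = -(-done // velocity_high)
          print("%s: sprint %d (optimistic) to sprint %d (pessimistic)" % (name, best, worst))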

    Read the article

  • Help with off-game tasks

    - by peoro
    I love writing video games for fun, and often do. I've noticed, though, that implementing the gameplay itself usually doesn't take me much time (maybe because I've done it plenty of times and know what to do and how for most things), but when I try to implement the off-game stuff I get lost. By off-game I mean everything that is not gameplay: menus, cutscenes between levels, a world map to choose levels, saving and loading status, managing replays... I've only tried to write a few of these, a few times, and always failed; that's why I've never really completed and distributed a game. Are these common problems? And where should I start? A minimal sketch of one of these tasks appears below. Where could I find some books or guides about this kind of thing?
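
    Of the items listed, saving and loading status is often the easiest place to start, because it forces you to pin down what your game state actually is. A minimal sketch - the field names are made up and the language is just for illustration, not whatever your games are written in:

      import json

      def save_game(state, path):
          # Serialize only plain data describing progress, never live objects.
          with open(path, "w") as f:
              json.dump(state, f)

      def load_game(path):
          with open(path) as f:
              return json.load(f)

      state = {"level": 3, "score": 1250, "player": {"x": 10.0, "y": 4.5, "health": 80}}
      save_game(state, "savegame.json")
      print(load_game("savegame.json"))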

    Read the article

  • Netbook performs hard shutdown without warning on low battery power

    - by Steve Kroon
    My Asus EEE netbook performs a hard shutdown when it reaches low battery power, without giving any warning - i.e. the power just goes off, without any shutdown process. I can't find anything in the syslog, and no error messages are printed before it happens. I've had this problem on previous (K)Ubuntu versions, and hoped updating to Ubuntu Precise would help resolve the issue, but it hasn't. The option in the Power application for "when power is critically low" is currently blank - the only options are a (grayed-out) hibernate and "Power off". I have re-installed indicator-power to no effect. The time remaining reported by acpi is unstable, as is the time remaining reported by gnome-power-statistics. (For example, running acpi twice in succession, I got 2h16min, and then 3h21min remaining. These sorts of jumps in the remaining time are also in the gnome-power-statistics graphs.) It might be possible to write a script to give me advance warning (as per @RanRag's comment below), but I would prefer to isolate why I don't get a critical battery notification from the system before this happens, so that I can take action as appropriate (suspend/shutdown/plug in power) when I get a notification. Some additional information on the battery:

      kroon@minia:~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
        native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0
        vendor:               ASUS
        model:                1005P
        power supply:         yes
        updated:              Fri Aug 17 07:31:23 2012 (9 seconds ago)
        has history:          yes
        has statistics:       yes
        battery
          present:             yes
          rechargeable:        yes
          state:               charging
          energy:              33.966 Wh
          energy-empty:        0 Wh
          energy-full:         34.9272 Wh
          energy-full-design:  47.52 Wh
          energy-rate:         3.7692 W
          voltage:             12.61 V
          time to full:        15.3 minutes
          percentage:          97.248%
          capacity:            73.5%
          technology:          lithium-ion
        History (charge):
          1345181483  97.248  charging
          1345181453  97.155  charging
          1345181423  97.062  charging
          1345181393  96.970  charging
        History (rate):
          1345181483  3.769   charging
          1345181453  3.899   charging
          1345181423  4.061   charging
          1345181393  4.201   charging

      kroon@minia:~$ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        charging state:          charging
        present rate:            332 mA
        remaining capacity:      3149 mAh
        present voltage:         12612 mV

      kroon@minia:~$ cat /proc/acpi/battery/BAT0/info
        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      3209 mAh
        battery technology:      rechargeable
        design voltage:          10800 mV
        design capacity warning: 10 mAh
        design capacity low:     5 mAh
        cycle count:             0
        capacity granularity 1:  44 mAh
        capacity granularity 2:  44 mAh
        model number:            1005P
        serial number:
        battery type:            LION
        OEM info:                ASUS
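
    Since a warning script was floated in the comments, here is a minimal sketch of one. It assumes a BAT0 battery exposing "capacity" and "status" under /sys/class/power_supply and that notify-send is installed; the thresholds are arbitrary. It works around the missing notification rather than explaining it:

      #!/usr/bin/env python
      # Poll the battery and shout before the hardware-level cutoff is reached.
      import subprocess
      import time

      BATTERY = "/sys/class/power_supply/BAT0"   # assumed battery name
      WARN_AT = 15                               # percent, arbitrary threshold
      ACT_AT = 5                                 # percent, arbitrary threshold

      def read(attr):
          with open(BATTERY + "/" + attr) as f:
              return f.read().strip()

      while True:
          if read("status") == "Discharging":
              percent = int(read("capacity"))
              if percent <= ACT_AT:
                  # Swap in whatever action you trust here: pm-suspend, a clean
                  # shutdown, or just a louder warning.
                  subprocess.call(["notify-send", "-u", "critical", "Battery critical",
                                   "%d%% left - suspend or plug in now" % percent])
              elif percent <= WARN_AT:
                  subprocess.call(["notify-send", "-u", "normal", "Battery low",
                                   "%d%% remaining" % percent])
          time.sleep(60)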

    Read the article

  • What happens to the storage capacity when I uninstall Ubuntu?

    - by shole1202
    I used the Wubi installer for Ubuntu 12.04. After having trouble getting the operating system to boot, I tried uninstalling it with Wubi. From 'My Computer' (in Windows 7), I noticed the maximum capacity of my hard drive dropped from 256 GB to 238 GB. I have tried using some methods with the command prompt to locate the missing storage, but Windows now only recognizes the disk as having 238 GB instead of the original 256. Is there any way to recover that space?
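
    One piece of arithmetic worth ruling out first (an aside with assumed units, not a diagnosis): drive vendors count decimal gigabytes while Windows reports binary gibibytes labelled "GB", and 256 decimal GB comes out to roughly 238 of the "GB" Windows Explorer shows.

      # Decimal gigabytes (as printed on the drive) converted to the binary
      # gibibytes that Windows labels "GB".
      bytes_total = 256 * 10**9
      windows_gb = bytes_total / 2.0**30
      print("%.1f" % windows_gb)   # ~238.4, which matches the 238 GB now shown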

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
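
    One rough way to frame the arithmetic, as a sketch with assumed numbers rather than a measured ext3 limit: pick a per-directory cap you are comfortable with and solve for the depth, assuming file names hash uniformly into a fixed fan-out at each level.

      import math

      TOTAL_FILES = 3000000   # from the question
      FANOUT = 256            # e.g. two hex characters per path component (assumption)
      MAX_PER_DIR = 10000     # comfort cap per directory (assumption)

      # Smallest depth d such that FANOUT**d leaf directories keep each one
      # under MAX_PER_DIR files, assuming a reasonably uniform hash.
      depth = int(math.ceil(math.log(float(TOTAL_FILES) / MAX_PER_DIR, FANOUT)))
      leaves = FANOUT ** depth
      print(depth, leaves, float(TOTAL_FILES) / leaves)
      # depth=1 -> 256 dirs with ~11,700 files each; depth=2 -> 65,536 dirs with ~46 each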

    Read the article

  • How much more memcache memory do I need to get a 95% hit ratio? [on hold]

    - by OneSolitaryNoob
    I have a memcache instance running that has a 90% hit ratio. How can I estimate how much more memory it needs to get to a 95% hit ratio? Edit: This question was blocked, but I do not think it is impossible to answer. After all, anyone who's used a caching system has answered this question, most likely with trial and error and luck. I can look at my usage patterns. I can increase or decrease memory and see how the hit rate changes. Both of these provide data that informs an estimate. But what's a good/better/best way to do this?
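
    One way to turn "increase memory and see how the hit rate changes" into an estimate, sketched under the assumption that the miss ratio follows a rough power law in cache size (a common heuristic, not a guarantee): fit the exponent from two observed points and extrapolate.

      import math

      # Two observed (cache_size_in_MB, hit_ratio) points -- hypothetical numbers.
      size1, hit1 = 1024, 0.90
      size2, hit2 = 2048, 0.92

      # Assume miss_ratio ~ C * size**(-a) and solve for the exponent a.
      a = math.log((1 - hit1) / (1 - hit2)) / math.log(float(size2) / size1)

      # Cache size needed for a 95% hit ratio under the same assumption.
      target_miss = 0.05
      needed = size1 * ((1 - hit1) / target_miss) ** (1 / a)
      print("estimated cache size: %.0f MB (fitted exponent %.2f)" % (needed, a))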

    Read the article

  • Is it possible to create a 4TB bootable partition in the x86 edition of Windows Server 2003 Enterprise?

    - by Giffyguy
    I'd like to find out if there is any way to accomplish this, since it would benefit my storage server greatly. I am using a Promise FastTrak 8660 and five Seagate ST31000340NS 1TB drives in a RAID 5 array. I figure that if the x86 ENTERPRISE edition of Server 2003 can handle 64GB of RAM, it should have no problem supporting larger HDD volumes as well. I've read (somewhere...) that the Windows Server operating systems are not limited to the standard 2TB the way Windows XP and 2000 are. I'm hoping it's something that just needs to be turned on, similar to the way PAE works around the 4GB RAM limit on x86 servers.

    Read the article

  • Increasing a Linux partition once VM size increased in vSphere?

    - by dannymcc
    I have an Ubuntu 12.04 VM running on VMware ESXi 5.1. The server (VM) itself has run out of space; the results of df -h are as follows:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda1        19G   17G  1.2G  94% /
      udev            490M  4.0K  490M   1% /dev
      tmpfs           200M  232K  199M   1% /run
      none            5.0M     0  5.0M   0% /run/lock
      none            498M     0  498M   0% /run/shm

    The original VM HDD size was just under 19GB, which I have now increased to 100GB within the vCenter GUI. Is there a simple way of doing this? The VM doesn't seem to acknowledge the increase at all.

    Read the article

  • Website Reference about Server Placement

    - by Manuel Faux
    I have to do a student research project about "server placement in a server room". The paper should contain guidance like "place the racks about 3 meters away from any wall", "mind the maximum load capacity of the (false) floor", and other placement strategies. I have been searching for a while, but I have not found any reliable reference I can use in my work. Does anyone know some useful websites about server placement?

    Read the article

  • How many websites can my server potentially hold?

    - by Daniel Kindler
    Sorry for the "noob" question, but... About how many medium-sized websites with average traffic could this server hold? Just like the average website, kind of like a small business site. How many sites could this server hold, but still maintain nice, decent speed?

      PowerEdge R510
        Chassis:                  PE R510 Chassis for Up to Four 3.5" Cabled Hard Drives, LED
        Processor:                Intel® Xeon® E5630 2.53Ghz, 12M Cache, Turbo, HT, 1066MHz Max Mem
        Memory:                   8GB Memory (4x2GB), 1333MHz Single Ranked UDIMMs for 1 Procs, Optimized
        Operating System:         SUSE Linux Enterprise Server 10, SP3, Up To 32 CPU Lic, 1 YR Sub, DIB, Media
                                  Red Hat Enterprise Linux Licensing
        Hard Drives:              250GB 7.2K RPM SATA 3.5" Cabled Hard Drive
        Hard Drives:              1TB 7.2K RPM SATA 3.5" Cabled Hard Drive
        Hard Drives:              2 x 2TB 7.2K RPM SATA 3.5" Cabled Hard Drive
        Hard Drive Configuration: No RAID, Embedded SATA Controller for x4 Chassis
        Power Supply:             480 Watt Non-Redundant Power Supply

    Thank you!
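
    There is no exact answer, but a back-of-envelope model can frame it. This is a sketch with assumed per-site numbers, not benchmarks for this exact box: estimate RAM per worker process and requests per second per site, then see which resource runs out first.

      # Back-of-envelope only; every number below is an assumption to replace
      # with measurements from your own sites.
      ram_gb = 8
      ram_reserved_gb = 2          # OS, database buffers, caches
      ram_per_worker_mb = 50       # one web-server worker process
      cores = 4                    # the E5630 is a quad-core part
      rps_per_core = 50            # dynamic requests/sec one core can render
      peak_rps_per_site = 2        # peak traffic for a typical small-business site

      max_workers = (ram_gb - ram_reserved_gb) * 1024 // ram_per_worker_mb
      cpu_capacity_rps = cores * rps_per_core
      sites = cpu_capacity_rps // peak_rps_per_site
      print("concurrent workers RAM allows:", max_workers)
      print("sites the CPU budget allows:  ", sites)
      # Whichever constraint binds first (RAM concurrency, CPU, or - with no
      # RAID and 7.2K SATA disks - disk I/O) sets the real limit.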

    Read the article

  • Why aren't there 8GB RAM modules yet?

    - by user49951
    Why is RAM module development seemingly stuck at the same size for a while now (a couple of years)? I bought 2x2GB modules 2 years ago, and the same size is still standard now, at even higher prices. I want more memory, because I work a lot on my computer and I just need it. What is going on? Hardware and memory progress was constant until these last couple of years, and I've been a heavy computer user for over 15 years. Why aren't 4GB/8GB modules here yet? I would gladly replace my DDR2 motherboard with a DDRX one if it had at least 4GB DDRX modules for a reasonable price. Now we have a situation where very cheap USB drives reach 64GB, while RAM modules sit at a pathetic 2GB. Sounds like some sort of conspiracy.

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares, and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use, and adjusting up as necessary. Some examples from a "problem" cluster: Resource pool summary - looks almost 4:1 overcommitted; note the high amount of ballooned RAM. Resource allocation - the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions. The real-time memory utilization graph of the top VM in the listing above - 4 vCPUs and 64GB RAM allocated, averaging under 9GB in use. Summary of the same VM. What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage? Tracking the peaks of "Active" versus time?
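
    To make the 4:1 figure concrete, here is a minimal sketch (with assumed numbers, not taken from the screenshots) of how the overcommitment ratio and the rough worst-case share per VM fall out of the configured totals, assuming proportional shares and no reservations:

      host_ram_gb = 128                          # physical RAM in the pool (assumption)
      vm_configured_gb = [64] * 2 + [16] * 24    # hypothetical VM sizes

      configured_total = sum(vm_configured_gb)
      overcommit = float(configured_total) / host_ram_gb
      print("configured %d GB on %d GB physical -> %.1f:1 overcommitted"
            % (configured_total, host_ram_gb, overcommit))

      # Under memory pressure, with default proportional shares and no
      # reservations, each VM's worst case is roughly its configured size
      # divided by the overcommit ratio.
      for size in sorted(set(vm_configured_gb), reverse=True):
          print("  %2d GB VM -> ~%.0f GB under contention" % (size, size / overcommit))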

    Read the article

  • Sizing Switches for Storage and Production

    - by Untalented
    A couple of questions. Should you always completely separate the storage network switches from the production switches, or are VLANs fine to segment this traffic? Is there a golden rule here? How do you properly size a switch for your environment based on the specifications the manufacturer provides (throughput, forwarding throughput, stacking throughput, max MAC addresses)? If you have two switch options and one has a maximum MAC table of 8,000 addresses vs. another with 16,000 - what does this really mean to me? How do I make sure one vs. the other is sized properly for me? Besides VLAN and jumbo frame support, are there any other "must haves" for a virtual environment's production or storage networks? There is a wealth of knowledge on sizing SANs and such, but this seems equally important, and it's quite challenging to find as much information. Just to add some tidbits of information about the environment: this setup refers to the data center, which supports two locations with about 100 users in total between them. The storage traffic will be iSCSI, with 3 ESXi hosts and one SAN housing about 2.7TB of data. Since there is currently no storage network in place (no SAN), I'm having a hard time with #2 - really determining what backplane throughput and switch specifications will be sufficient.
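
    For question #2, one common sanity check is the standard wire-speed arithmetic, sketched here with an assumed port count: compare the vendor's forwarding rate and switching capacity against what fully loaded ports could theoretically demand.

      ports = 24                 # assumed port count for the switch being evaluated
      port_speed_gbps = 1.0      # gigabit access ports

      # Smallest Ethernet frame on the wire: 64B frame + 8B preamble + 12B gap = 84B.
      frames_per_sec_per_port = (port_speed_gbps * 1e9) / (84 * 8)   # ~1.488 Mpps

      wire_speed_mpps = ports * frames_per_sec_per_port / 1e6
      fabric_gbps_needed = ports * port_speed_gbps * 2               # full duplex

      print("forwarding rate needed for line rate: %.1f Mpps" % wire_speed_mpps)
      print("switching capacity needed: %.0f Gbps" % fabric_gbps_needed)
      # Compare these against the datasheet's "forwarding throughput" (Mpps) and
      # "switching capacity" (Gbps); the MAC table size only needs to exceed the
      # number of hosts and VM vNICs the switch will ever see on its segment.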

    Read the article

  • Watch the Dec. 8 Webcast: Oracle CFO Discusses Planning and Forecasting

    - by Theresa Hickman
    Watch CFO.com's webcast featuring Oracle's CFO, Jeff Epstein, discussing how CFOs and CIOs lead their organizations to better planning, forecasting, and performance. Date: Wednesday, December 8, 2010. Time: 2:00 PM EST. Duration: 30 mins. In this webcast, Celina Rogers, director of research with CFO Research Services, summarizes the latest findings from a fall 2010 survey on the issues of timely, accurate, and relevant forecasting and planning. Also in this webcast, you will hear firsthand from Jeff Epstein, CFO of Oracle, on how a senior finance leader can partner successfully with IT to support growth during the economic recovery. Click here to register for this webcast.

    Read the article

  • Why is there a 20 and not 21 in some versions of Planning Poker?

    - by SuffixTreeMonkey
    In Planning Poker, cards usually contain numbers from the Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, etc. However, as you can see on the Wikipedia page (and as has been confirmed to me by people who work at several places where Planning Poker is used), in some editions the cards stray from the Fibonacci sequence after 13. They lower 21 to 20 and then continue with 40 and 100. Is there some rationale for why these values were changed, specifically 21 to 20? (Also note that some other cards are added, such as ? and 1/2, but these are easier for me to understand, compared to the 21-to-20 shift.)

    Read the article

  • Network cabling with multiple patch panels?

    - by dannymcc
    I am in the very early stages of planning a network cabling upgrade in our office, mainly to upgrade the old cables from Cat5 to either Cat5e or Cat6. I am also planning to upgrade all of our 10/100 switches to 10/100/1000 switches. I would like to have three small wall-mounted cabinets spread around the building, each with a patch panel and switch, all leading back to our server room. The question is: should I have two patch panels in each wall cabinet - one with 24 or 48 ports connected to a matching patch panel in the server room, and a second patch panel linking to each device in that cabinet's area? In that case I wouldn't put a switch in the small cabinets; all switching would be done in the server room. Or should I run one main cable from the server room to each of the cabinets, plugged straight into a switch there, with the patch panel serving the devices in that cabinet's area? I hope that makes sense!

    Read the article

  • How to shrink-to-fit an std::vector in a memory-efficient way?

    - by dehmann
    I would like to 'shrink-to-fit' an std::vector, reducing its capacity to its exact size so that the additional memory is freed. The standard trick seems to be the one described here:

      template <typename T, class Allocator>
      void shrink_capacity(std::vector<T, Allocator>& v)
      {
          // Construct a copy sized exactly to v's contents, then swap buffers with v.
          std::vector<T, Allocator>(v.begin(), v.end()).swap(v);
      }

    The whole point of shrink-to-fit is to save memory, but doesn't this method first create a deep copy and then swap the instances? So at some point - when the copy is constructed - the memory usage is doubled? If that is the case, is there a more memory-friendly method of shrink-to-fit? (In my case the vector is really big and I cannot afford to have both the original plus a copy of it in memory at any time.)

    Read the article

  • CPU running at full capacity when booted to DOS?

    - by Kevin H
    Does the CPU run at 100%, or near full capacity, when the computer is booted into MS-DOS? Will the CPU temperature become higher even though we are not running any program in DOS mode? In Windows, we can see CPU usage as a utilization percentage in Task Manager. From what I've heard, the CPU runs at near 100% capacity in DOS or on the BIOS main screen. Is this caused by a lack of CPU optimization in DOS?
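
    For intuition, here is an analogy in a modern scripting language rather than actual DOS code: an idle loop that polls continuously keeps a core pegged at 100%, while one that yields to the OS or halts between checks does not. DOS-era prompts typically did the former, which is why utilization and temperature can stay high even when "nothing" is running.

      import time

      def busy_poll(check_input, duration=5.0):
          # Spins continuously, like a polling keyboard loop:
          # one core stays near 100% and runs hot.
          end = time.time() + duration
          while time.time() < end:
              check_input()

      def cooperative_poll(check_input, duration=5.0, interval=0.01):
          # Sleeps between checks, letting the OS halt or downclock the CPU:
          # utilization stays near 0%.
          end = time.time() + duration
          while time.time() < end:
              check_input()
              time.sleep(interval)

      busy_poll(lambda: None)          # watch a core hit 100% in your task monitor
      cooperative_poll(lambda: None)   # then drop back toward idle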

    Read the article
