Search Results

Search found 3000 results on 120 pages for 'computing capacity'.

Page 94/120 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Orbital equations, and power required to run them

    - by Adam Davis
    Due to a discussion on the SO IRC today, I'm curious about orbital mechanics: the equations needed to solve orbital problems, and the computing power required to solve complex problems. The question in particular is calculating when the Earth will plow into the Sun (or vice versa, depending on the frame of reference). I suspect that all the gravitational pulls within our solar system may need to be calculated, which makes me wonder what type of computer cluster is required, or whether this can be done on a single box. I don't have the experience to do a back-of-the-napkin test here, but perhaps you do? Also, many thanks to Gortok for the original inspiration (see comments).
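    A minimal sketch (not part of the original question) of the back-of-the-napkin scale involved: a naive O(n^2) gravitational step in Python, here with just the Sun and Earth and illustrative constants. Even with every planet added, each step is only a few hundred floating-point operations, which is why a single box handles this kind of simulation comfortably.

        import math

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        # Illustrative two-body setup: Sun at the origin, Earth on a roughly circular orbit.
        bodies = [
            {"m": 1.989e30, "pos": [0.0, 0.0],      "vel": [0.0, 0.0]},      # Sun
            {"m": 5.972e24, "pos": [1.496e11, 0.0], "vel": [0.0, 29780.0]},  # Earth
        ]

        def step(bodies, dt):
            """Advance all bodies by dt using a naive O(n^2) semi-implicit Euler step."""
            for i, b in enumerate(bodies):
                ax = ay = 0.0
                for j, other in enumerate(bodies):
                    if i == j:
                        continue
                    dx = other["pos"][0] - b["pos"][0]
                    dy = other["pos"][1] - b["pos"][1]
                    r = math.hypot(dx, dy)
                    a = G * other["m"] / (r * r)
                    ax += a * dx / r
                    ay += a * dy / r
                b["vel"][0] += ax * dt
                b["vel"][1] += ay * dt
            for b in bodies:
                b["pos"][0] += b["vel"][0] * dt
                b["pos"][1] += b["vel"][1] * dt

        # One simulated year at one-hour steps: under ten thousand iterations, trivial on one machine.
        for _ in range(365 * 24):
            step(bodies, 3600.0)
        print(bodies[1]["pos"])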

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? 2 answers We are a start-up, and six months back we launched our beta version website. Now we are in the phase of building our website and web services for the final product. This website will be based on PHP, Python, and a MySQL database, with a WAMP server. Right now in the beta version we are using an Azure VM for hosting, with a configuration of 786MB RAM and a shared CPU. We have 200 average daily users coming to our website. Now we are trying to increase the number of users from 200 to 1500 daily users, and I am thinking our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which can further increase load on the server. So here are the questions that bring me here: I am pretty confused about whether to go with shared hosting or VM-based hosting. If VM, then what configuration will be best for our requirements (as discussed above)? Currently our VM is a Windows-based server and it's very simple to manage, so other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing the server for our requirements?

    Read the article

  • Semantic stuff (RDF, OWL) on mobile phones - is it possible?

    - by Brian Schimmel
    I'm thinking about using semantic (web) technologies like RDF and OWL in an application on mobile devices. Currently I'm targeting Android, but I'd also be interested in the possibilities on the iPhone and on J2ME. I would like to use a lib instead of implementing everything from scratch. I know that there are some libraries and frameworks like Jena, Redland, and Protégé, but they don't state on which platforms they are known to work. Having a dynamic object model and parsing from and to XML are must-haves for me. I'd also like to use reasoning, but I've been told it is rather computing-intensive, so that's only a nice-to-have. For all platforms mentioned, the question can be interpreted as: Is it possible in theory (especially for J2ME I'm not sure)? Are there libs that are known to work on those platforms? Is the performance on a mobile platform good enough for real-world usage?

    Read the article

  • What is the difference between a PDU and a power strip (both 120V, 15A)?

    - by rob
    I just chatted with an APC rep about upgrading the UPSes at our office. She recommended a single higher-capacity 6-outlet Smart-UPS to replace the four Back-UPS units we currently have. When I asked how she recommended plugging in all the current devices, she recommended using an APC AP9567 PDU, but said not to use a power strip. At first she said I had to use an APC-brand PDU, but after I inquired about using a Tripp-Lite PDU, she said any brand of PDU would be fine. The APC PDU previously referenced looks like a standard 120V power strip with overload protection but no surge protection. Other than overload protection (which seems redundant if plugging into the UPS), is there something else I'm missing, or should any power strip (without surge protection) be fine? Edit: I didn't mention it earlier, but we don't have a proper rack, though I did still plan to mount the PDU or power strip to something. I guess I'm wondering if there's any special reason I should pay as much as $180 for the low-end APC PDU (which just looks like a power strip to me) vs. $20-$30 for a workbench power strip.

    Read the article

  • linux routing issue

    - by Duc To
    Hi! I have 2 Linksys routers which have Linux running on them, using the Tomato firmware. Both have internet lines plugged in, but only one acts as the DHCP server (router 1). What I am trying to achieve: all packets from internal IPs that want to access the internet go to router 1 and out through its internet line, but if router 1 detects packets for a specific port (for example, HTTP port 80), it should redirect those packets to router 2 so they go out to the internet from there. I have found some documents whose solution is a Linux server with 2 Ethernet cards: plug both internet lines into that server and do the routing there. But I do not want to do that, because my boss does not want the extra work of maintaining that server; besides, he says the router itself is already a Linux box, so why not use it? I tend to agree with his points. Can it be done, or is a separate Linux server acting as a router a must? Thank you all in advance and I really look forward to your replies. I am a newbie to Linux networking and it seems to be something beyond my capacity to solve :( Yours sincerely, Duc To

    Read the article

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirreL Client does not open multiple tables at a time. When you click on a different Table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL Client? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like tabs for each table so I can quickly switch between table views rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to Super User.

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The goal was to make the PC as fast as possible while having increased storage capacity available for normal user data, and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files (or C:\Program Files (x86) for 32-bit programs), although I have changed the environment variables. This wouldn't normally be an issue, however every installer points its shortcuts at my 640GB HDD. The root layout of both drives: To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installer lets me change the installation path, but installers sometimes don't let me change this. Is there a way to force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.

    Read the article

  • Ubuntu "No space left on device" for /home, df shows 100% full, du shows much, much less

    - by Jon Cram
    On an Ubuntu 12.04 server, normal users can no longer create or add to files in /home, encountering a "No space left on device" error. The /home directory has a capacity of 1.7 terabytes and as far as I can tell is nowhere near full in terms of actual data stored or inodes used. df -h shows:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/md2        1.0T   18G  955G   2% /
        udev            7.7G  4.0K  7.7G   1% /dev
        tmpfs           3.1G  320K  3.1G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            7.7G     0  7.7G   0% /run/shm
        cgroup          7.7G     0  7.7G   0% /sys/fs/cgroup
        /dev/md3        1.7T  1.7T     0 100% /home
        /dev/md1        496M   45M  426M  10% /boot

    /home indeed looks rather full. du -hs /home suggests otherwise:

        1.4G    /home

    There appears to be no inode issue - df -i:

        Filesystem       Inodes  IUsed     IFree IUse% Mounted on
        /dev/md2       67108864  75334  67033530    1% /
        udev            2013497    527   2012970    1% /dev
        tmpfs           2015816    440   2015376    1% /run
        none            2015816      2   2015814    1% /run/lock
        none            2015816      1   2015815    1% /run/shm
        cgroup          2015816      9   2015807    1% /sys/fs/cgroup
        /dev/md3      113909760 105981 113803779    1% /home
        /dev/md1         131072    239    130833    1% /boot

    I recently deleted many gigabytes of application cache and log data from /home, however this was in the tens of gigabytes at best and nowhere near the capacity of /home. Update 1:

        du -hs --apparent-size /home
        1.2G    /home
        du -hs /home
        1.4G    /home

    What might be going on here?
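    One classic cause of df reporting a full filesystem while du sees almost nothing (not confirmed for this particular server, just a common culprit) is a large file that has been deleted but is still held open by a running process: the directory entry is gone, so du can't count it, but the blocks stay allocated until the last handle closes, so df still does. A tiny Python sketch of how that situation arises (the path is a placeholder):

        import os

        # Create a file, keep the handle open, then unlink it. du no longer sees it,
        # but the blocks remain allocated until f.close(), so df still counts them.
        f = open("/tmp/ghost.dat", "wb")
        os.unlink("/tmp/ghost.dat")
        f.write(b"\0" * (100 * 1024 * 1024))  # 100 MB that du cannot account for
        f.flush()
        input("Compare `df -h /tmp` and `du -sh /tmp` now, then press Enter...")
        f.close()  # the space is released here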

    Read the article

  • No method 'get' on backbone model save

    - by user888734
    I'm using backbone for a reasonably complicated form. I have a number of nested models, and have been computing other variables in the parent model like so:

        // INSIDE PARENT MODEL
        computedValue: function () {
            var value = this.get('childModel').get('childModelProperty');
            return value;
        }

    This seems to work fine for keeping my UI in sync, but as soon as I call .save() on the parent model, I get:

        Uncaught TypeError: Object #<Object> has no method 'get'

    It seems that the child model kind of temporarily stops responding. Am I doing something inherently wrong? EDIT: The stack trace is:

        Uncaught TypeError: Object #<Object> has no method 'get'  publish.js:90
        Backbone.Model.extend.neutralDivisionComputer             publish.js:90
        Backbone.Model.extend.setNeutralComputed                  publish.js:39
        Backbone.Events.trigger                                   backbone.js:163
        _.extend.change                                           backbone.js:473
        _.extend.set                                              backbone.js:314
        _.extend.save.options.success                             backbone.js:385
        f.Callbacks.o                                             jquery.min.js:2
        f.Callbacks.p.fireWith                                    jquery.min.js:2
        w                                                         jquery.min.js:4
        f.support.ajax.f.ajaxTransport.send.d

    Read the article

  • Why might different computers calculate different arithmetic results in VB.NET?

    - by Eyal
    I have some software written in VB.NET that performs a lot of calculations, mostly extracting JPEGs to bitmaps and running calculations on the pixels, like convolutions and matrix multiplication. Different computers are giving me different results despite having identical inputs. What might be the reason? Edit: I can't provide the algorithm because it's proprietary, but I can provide all the relevant operations:

        ULong \ ULong (truncating division)
        Bitmap.Load("filename.bmp") (load a bitmap into memory)
        Bitmap.GetPixel(Integer, Integer) (get a pixel's brightness)
        Double + Double
        Double * Double
        Math.Sqrt(Double)
        Math.PI
        Math.Cos(Double)
        ULong - ULong
        ULong * ULong
        ULong << ULong
        List.OrderBy(Of Double)(Func)

    Hmm... Is it possible that OrderBy is using a non-stable quicksort and that quicksort is using a random pivot? Edit: Just tested, nope. The sort is stable.

    Read the article

  • Algorithm to distribute objects in a box (like InDesign, Illustrator, Draw!)

    - by Rafael Almeida
    I have a set of rectangles with their corresponding positions and a big rectangle which serves as the 'bounding box' for these rectangles. I would like to know of an algorithm that would 'distribute the free space' evenly among the rectangles. Some of you may be familiar with the Distribute Spacing option in Adobe InDesign and similar layout-oriented apps. That would be what I'm looking for. I did try looking it up, but I'm not familiar with 'graphical' algorithms terminology and trying only terms relating to 'distribute' mainly yields results about Distributed Computing. So, even the names of the algorithms or better terms to look up would be a big help. Finally, the algorithm doesn't need to be rigorously the same as InDesign's one: pretty much any algorithm that 'distributes' objects inside a region will work fine. In fact, since I'm striving for visual appeal mainly, the more suggestions the better. =D

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per RAID 5 array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: doing several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue. Sorry I hadn't mentioned that initially. The concern is more whether we want to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.

    Read the article

  • Windows 7 shows a drive as full in summary but files shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as 2 drives: C:\ and D:\. Windows 7 shows D:\ as full in the graphical summary in the 'My Computer' view of all the drives; e.g. the bar graph indicates that nearly all of the drive's 108GB capacity is used. So I go into the D:\ drive to look at the files and see several folders. I select them all and use the right-click menu's Properties to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108GB. But the Properties window shows the files are very small, KBs and MBs, nowhere near 108GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too - and counted these in the Properties. Something invisible is holding the space. What is happening here? I'm afraid to delete anything if it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?

    Read the article

  • How do I permanently delete /var/log/lastlog?

    - by GregB
    My /var/log/lastlog file is huge. I know it's really only a few kilobytes, but tar isn't smart enough to know that, so when I image a virtual machine, my restore fails because it thinks I'm trying to load more data than I have capacity for on my disk. I want to delete /var/log/lastlog and stop any and all logging to the file. I'm aware of the security implications. This logging needs to stop to preserve my backup strategy. I've made a change to /etc/pam.d/login which I was told would disable logging to /var/log/lastlog, but it does not appear to work, as /var/log/lastlog keeps growing.

        # Prints the last login info upon successful login
        # (Replaces the `LASTLOG_ENAB' option from login.defs)
        #session    optional   pam_lastlog.so

    Any ideas? EDIT: For anyone interested, I use Centrify Express to authenticate my users via LDAP. Centrify Express is "free", but one of the drawbacks is that I can't manage user UIDs via LDAP, so they are given a dynamic UID when they log in to a server. Centrify picks some crazy high UID values (so they don't conflict with local users on the server, presumably). /var/log/lastlog is indexed by UID and grows to accommodate the largest UID on the system. This means that when a Centrify user logs in, they get a UID in the upper end of the UID range, which causes lastlog to allocate an obscene amount of space, according to the file system.

        ~$ ll /var/log/lastlog
        -rw-rw-r-- 1 root root 291487675780 Apr 10 16:37 /var/log/lastlog
        ~$ du -h /var/log/lastlog
        20K /var/log/lastlog

    More info - sparse files.
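    A small illustration (not from the original question; the path and offset are placeholders, and it assumes a filesystem that supports sparse files) of why lastlog behaves this way: it is a sparse file indexed by UID, so a single record written at a huge offset inflates the apparent size that ls and tar see, while almost no blocks are actually allocated, which is what du reports.

        import os

        # Mimic a UID-indexed log like /var/log/lastlog: write one record at a
        # very large offset and compare apparent size with allocated blocks.
        path = "/tmp/sparse_demo"
        with open(path, "wb") as f:
            f.seek(300_000_000_000)   # as if a user with a huge UID just logged in
            f.write(b"\x01")

        st = os.stat(path)
        print("apparent size:", st.st_size, "bytes")       # ~300 GB, what ls/tar see
        print("allocated:", st.st_blocks * 512, "bytes")   # a few KB, what du reports
        os.remove(path)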

    Read the article

  • Replacing HD in a Mac OS 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both 10.6.8). No.1 is an Open Directory master with 50+ user accounts; No.2 has only 2 local accounts (/local/Default) and looks to the OD master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have 3 internal HDs: the boot volume with only the Server OS and minimal apps, a 'DataShare' HD (500GB), and a backup drive (500GB). After upgrading the DataShare HD in server No.2 from a small internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on server No.2. Users get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last edit dates and times). Everything booted up OK, with no indication of any issues (paths to the sharepoint look good). Notes during troubleshooting: Server 1 is operating perfectly, all users can access shares and authenticate, etc. I've checked that the SACL (Server Access Control List) settings are OK. On Server 2, in the Server Admin app, I can see all the shares listed OK; the paths seem valid, and I can disable/re-enable the shares with no errors. On Server 2, Workgroup Manager lists all the accounts from the OD master in the LDAP directory view. All seems fine from here. Basically everything looks normal, but no file shares on Server 2 can be accessed by regular users.

    Read the article

  • Efficiently compute the row sums of a 3d array in R

    - by Gavin Simpson
    Consider the array a:

        > a <- array(c(1:9, 1:9), c(3,3,2))
        > a
        , , 1

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

        , , 2

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

    How do we efficiently compute the row sums of the matrices indexed by the third dimension, such that the result is:

             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    The column sums are easy via the 'dims' argument of colSums():

        > colSums(a, dims = 1)

    but I cannot find a way to use rowSums() on the array to achieve the desired result, as it has a different interpretation of 'dims' to that of colSums(). It is simple to compute the desired row sums using:

        > apply(a, 3, rowSums)
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    but that is just hiding the loop. Are there other efficient, truly vectorised ways of computing the required row sums?

    Read the article

  • Can the SHA-1 algorithm be computed on a stream, with a low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them fully into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know if it is even possible to do that. If you know the SAX XML parser, then what I am looking for would be something similar: computing the SHA-1 checksum by loading only a small part into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file/byte array/string into memory. If you know of implementations (any language), then please let me know!
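    SHA-1 processes its input in fixed-size blocks and keeps only a small internal state, so incremental hashing is possible by design; most crypto libraries expose this as an update()-style API. A minimal Python sketch (the filename is a placeholder), analogous to feeding chunks to MessageDigest.update() in Java:

        import hashlib

        def sha1_of_file(path, chunk_size=1 << 20):
            """Compute the SHA-1 of a file by feeding it to the digest in chunks,
            so memory use stays around chunk_size regardless of file size."""
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        print(sha1_of_file("some_very_large_file.iso"))  # placeholder filename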

    Read the article

  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searched Google using every search phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit. They're all geared toward using cloud computing for file storage and processing, but that's not what I want. I just want to access their API to search info on books. Kind of like how http://openlibrary.org/ does it, where it's a simple URL call to get information on a book (but their databases aren't nearly as populated as Amazon's). Why is it so damned hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.

    Read the article

  • Mahout Recommendations on Binary data

    - by Pranay Kumar
    Hi, I'm a newbie to Mahout. My aim is to produce recommendations from binary user purchase data, so I applied an item-item similarity model to compute top-N recommendations for the MovieLens data, treating ratings of 1-3 as 0 and ratings of 4-5 as 1. Then I tried evaluating my recommendations against the ratings in the test data, but there have hardly been two or three matches between my top 20 recommendations and the top-rated items in the test data, and no match at all for most users. So are my recommendations totally bad by nature, or do I need to go for a different measure for evaluating my recommendations? Please help me! Thanks in advance. Pranay, 2nd yr, UG student.

    Read the article

  • High speed network configuration

    - by Peter M
    Sorry if this seems to be a stupid question; I'm not sure how to specify what I want to know when searching Google. I will have 2 or 3 devices pumping out data on a 100Base-T port. The combined data rate of all devices is about 15 MB/s, which exceeds the practical 100Base-T channel capacity (roughly 12 MB/s) but is well within the realms of a 1000Base-T connection. Each device will send a burst of data in the form of an FTP transfer to a common, single host computer in a sequential manner, i.e.: Device A establishes an FTP connection and transfers data; Device B establishes an FTP connection and transfers data; Device C establishes an FTP connection and transfers data. It may be that the A&B, B&C and C&A transfers overlap in the time domain to some extent. There will be minimal traffic going back from the computer to each device (in general whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer. Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features in a switch should I be looking for? Or would it be better to have 3 physical point-to-point 100Base-T dedicated connections between each device and the host computer (thus having at least 3 physical Ethernet interfaces on that computer)? Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help. Peter

    Read the article

  • Ubuntu 10.04 network manager issues

    - by Shark
    I was using the default network manager to connect to my wi-fi network, but if the connection is dropped or the router is restarted, the network manager won't reconnect automatically after, I guess, a couple of tries, and just gives a pop-up to connect manually. To avoid this annoyance I installed WICD, but though it does try to reconnect to the network after a drop in connection, it is unable to resolve the IP address and I am left with an even bigger annoyance. 1. Is there a way to counter either of these issues? 2. Something like a background process that will check the network status periodically and then try to connect to a favored network? Edit - output of lshw -C network:

        *-network
             description: Wireless interface
             product: Broadcom Corporation
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:12:00.0
             logical name: eth1
             version: 01
             serial: c0:cb:38:18:9b:7f
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=wl0 driverversion=5.60.48.36 ip=192.168.11.2 latency=0 multicast=yes wireless=IEEE 802.11
             resources: irq:17 memory:fbc00000-fbc03fff
        *-network
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:13:00.0
             logical name: eth0
             version: 02
             serial: f0:4d:a2:94:2d:74
             size: 10MB/s
             capacity: 100MB/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s
             resources: irq:29 ioport:e000(size=256) memory:d0b10000-d0b10fff(prefetchable) memory:d0b00000-d0b0ffff(prefetchable) memory:fb200000-fb21ffff(prefetchable)

    Read the article

  • starting 64 Bit Windows Application Development

    - by user173438
    I intend to start writing a 64-bit scientific computing application (signal processing) for Windows using Microsoft Visual Studio 2008. What should I have ready as far as a development platform is concerned? How would it be different from 32-bit development? What could the porting issues be for a 32-bit version that I already have (OK - this might be too early to ask, even before I start compiling)? As you might have guessed, I am looking for general directions. All pointers would be much appreciated! :) Thanks in advance.

    Read the article

  • A simple algorithm for polygon intersection

    - by Elazar Leibovich
    I'm looking for a very simple algorithm for computing the polygon intersection/clipping. That is, given polygons P and Q, I wish to find polygon T which is contained in P and in Q, and I wish T to be maximal among all possible polygons. I don't mind the run time (I have a few very small polygons); I can also afford getting an approximation of the polygons' intersection (that is, a polygon with fewer points, but which is still contained in the polygons' intersection). But it is really important for me that the algorithm be simple (cheaper testing) and preferably short (less code). Edit: please note, I wish to obtain a polygon which represents the intersection. I don't need only a boolean answer to the question of whether the two polygons intersect.
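    Probably the simplest standard choice when at least the clip polygon is convex is Sutherland-Hodgman clipping; this is not from the original question and does not handle two arbitrary concave polygons, but for small convex inputs it is about as short as polygon intersection gets. A sketch in Python (vertices assumed in counter-clockwise order):

        def clip(subject, clip_poly):
            """Sutherland-Hodgman: clip `subject` against a convex `clip_poly`.
            Both are lists of (x, y) vertices in counter-clockwise order.
            Returns the vertices of the intersection polygon (possibly empty)."""
            def inside(p, a, b):
                # True if p lies on the left of (or on) the directed edge a->b.
                return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

            def intersection(p1, p2, a, b):
                # Intersection of segment p1-p2 with the infinite line through a-b.
                dx, dy = p2[0]-p1[0], p2[1]-p1[1]
                ex, ey = b[0]-a[0], b[1]-a[1]
                t = ((a[0]-p1[0])*ey - (a[1]-p1[1])*ex) / (dx*ey - dy*ex)
                return (p1[0] + t*dx, p1[1] + t*dy)

            output = list(subject)
            for i in range(len(clip_poly)):
                a, b = clip_poly[i], clip_poly[(i+1) % len(clip_poly)]
                if not output:
                    break
                input_list, output = output, []
                s = input_list[-1]
                for e in input_list:
                    if inside(e, a, b):
                        if not inside(s, a, b):
                            output.append(intersection(s, e, a, b))
                        output.append(e)
                    elif inside(s, a, b):
                        output.append(intersection(s, e, a, b))
                    s = e
            return output

        # Example: a unit square clipped against an overlapping triangle.
        print(clip([(0, 0), (1, 0), (1, 1), (0, 1)],
                   [(0.5, -0.5), (1.5, 0.5), (0.5, 1.5)]))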

    Read the article

  • Support-function in the GJK-algorithm.

    - by Marcus Johansson
    I am trying to implement the GJK algorithm, but I got stuck instantly. The problem is implementing the support function so that it isn't O(n^2). As it is now, I'm computing the complete Minkowski difference, and then there is really no point in doing the GJK algorithm (or is there?). What I mean by support function is the function that returns the point in the Minkowski difference that is furthest away in a specified direction. I assume this shouldn't be O(n^2) as it is in my current implementation.
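    The property that avoids O(n^2) work: for convex shapes, the support point of the Minkowski difference A - B in direction d is support(A, d) - support(B, -d), so the difference never has to be built explicitly. A minimal sketch (the vertex-list representation and helper names are assumptions for illustration, not the poster's code):

        def support(vertices, d):
            """Vertex of a convex shape furthest along direction d (O(n) scan)."""
            return max(vertices, key=lambda v: v[0]*d[0] + v[1]*d[1])

        def support_minkowski_diff(verts_a, verts_b, d):
            """Support point of A - B in direction d, computed as
            support(A, d) - support(B, -d): O(|A| + |B|) instead of O(|A|*|B|)."""
            pa = support(verts_a, d)
            pb = support(verts_b, (-d[0], -d[1]))
            return (pa[0] - pb[0], pa[1] - pb[1])

        # Example: two convex quads and an arbitrary search direction.
        A = [(0, 0), (2, 0), (2, 2), (0, 2)]
        B = [(1, 1), (3, 1), (3, 3), (1, 3)]
        print(support_minkowski_diff(A, B, (1.0, 0.0)))  # -> (1, -1)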

    Read the article

  • Can't mount hard drive. Ubuntu 12.04

    - by Sam
    I am trying to recover some pictures on my 320 GB hard disk, so I put in a Live Ubuntu CD and am in that right now. In the devices list, it shows my USB drive but not my 320 GB hard disk. I can see the disk in Disk Utility (it says it's on /dev/sda), but it's not mounted, and it says it has a few bad sectors but is OK. In Disk Usage Analyzer, it says my maximum capacity is 13.4 GB, so it's definitely not using the 320 GB hard disk. I tried the following:

        sudo mkdir /media/newhd       (worked)
        sudo mount /dev/sda /media/newhd   (didn't work: it says I must specify the filesystem type)

    I then tried:

        fsck.ext4 -f /dev/sda

    which didn't work. It said: Superblock invalid, trying to backup blocks. Then: Bad magic number in super-block while trying to open /dev/sda. The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock. Does anyone have any ideas? The whole problem started when my Windows Vista said "Can't find operating system". Any ideas on how I can get onto my hard drive at /dev/sda?

    Read the article
