Search Results

Search found 40159 results on 1607 pages for 'multiple users'.

  • Language parsing to find important words

    - by Matt Huggins
    I'm looking for some input and theory on how to approach a lexical topic. Say I have a collection of strings, each of which may be just one sentence or potentially multiple sentences. I'd like to parse these strings and pull out the most important words, perhaps with a score that denotes how likely each word is to be important. Let's look at a few examples of what I mean.

    Example #1: "I really want a Keurig, but I can't afford one!" This is a very basic, single-sentence example. As a human, I can easily see that "Keurig" is the most important word here. "Afford" is also relatively important, though it's clearly not the primary point of the sentence. The word "I" appears twice, but it is not important at all, since it doesn't really tell us any information. I might expect a hash of word/score pairs something like this:

        "Keurig" => 0.9
        "afford" => 0.4
        "want"   => 0.2
        "really" => 0.1
        ...

    Example #2: "Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch." This example has multiple sentences, so there will be more important words throughout. Without repeating the scoring exercise from example #1, I would expect two or three really important words to come out of this: "swimming" (or "swimming practice"), "competition", and "watch" (or "waterproof watch" or "non-waterproof watch", depending on how the hyphen is handled).

    Given a couple of examples like this, how would you go about doing something similar? Are there any existing (open source) libraries or algorithms that already do this?
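
    A common baseline for this kind of scoring is TF-IDF: words that are frequent in the given text but rare across a background corpus score highest, which automatically buries low-information words like "I" and "really". Below is a minimal sketch, assuming a recent scikit-learn is installed; the tiny background corpus is purely illustrative, and a larger one would give much better IDF weights.

        # Minimal TF-IDF keyword-scoring sketch (assumes scikit-learn).
        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = [
            "I really want a Keurig, but I can't afford one!",
            "Just had one of the best swimming practices of my life.",
            # ... a larger background corpus improves the IDF weighting
        ]

        # stop_words='english' drops low-information words such as "I"
        vectorizer = TfidfVectorizer(stop_words="english")
        tfidf = vectorizer.fit_transform(docs)

        # Rank the words of the first document, highest score first
        terms = vectorizer.get_feature_names_out()
        scores = tfidf[0].toarray().ravel()
        ranked = sorted(zip(terms, scores), key=lambda pair: -pair[1])
        print([(term, round(score, 2)) for term, score in ranked if score > 0])

    For multi-word keys such as "swimming practice", graph-based extractors in the TextRank family, or RAKE, are common open-source starting points.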

  • Why can I view my site over a 3G connection but not through my wifi?

    - by Jonathan
    So, I am sitting in my office with four computers on the same network and internet connection. Two of the computers can visit this particular website. Two of the computers get a "Google Chrome could not find..." error; I have tried Firefox and IE as well, with the same problem. I can view the site 90% of the time on the two working computers, although the site seems slow and sometimes I also get the same errors as on the other two computers. I have flushed the DNS, reset the router, and tested the site on other people's computers successfully. Is this likely to be a site issue, an ISP issue, or a hosting issue? Any advice is greatly appreciated. Here is the ping from the working machine:

        C:\Users\Jon>ping www.balihaicruises.com
        Pinging www.balihaicruises.com [208.113.173.102] with 32 bytes of data:
        Reply from 208.113.173.102: bytes=32 time=331ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=327ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=326ms TTL=47
        Reply from 208.113.173.102: bytes=32 time=329ms TTL=47
        Ping statistics for 208.113.173.102:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 326ms, Maximum = 331ms, Average = 328ms

    Traceroute:

        Tracing route to www.balihaicruises.com [208.113.173.102] over a maximum of 30 hops:
          1     1 ms    17 ms     3 ms  192.168.1.1
          2    42 ms    37 ms    36 ms  180.254.224.1
          3    39 ms    47 ms    40 ms  180.252.1.69
          4    36 ms   616 ms    57 ms  61.94.115.221
          5    84 ms    76 ms    80 ms  180.240.191.98
          6    73 ms    80 ms    72 ms  180.240.191.97
          7   157 ms   143 ms   116 ms  180.240.190.82
          8   115 ms   113 ms   120 ms  ae1-123.hkg11.ip4.tinet.net [183.182.80.93]
          9   331 ms   332 ms   335 ms  xe-3-2-1.was14.ip4.tinet.net [89.149.184.30]
         10   327 ms   330 ms   331 ms  internap-gw.ip4.tinet.net [77.67.69.254]
         11   437 ms   415 ms   350 ms  border10.pc2-bbnet2.wdc002.pnap.net [216.52.127.73]
         12   322 ms   823 ms   398 ms  dreamhost-2.border10.wdc002.pnap.net [216.52.125.74]
         13   328 ms   336 ms   326 ms  ip-208-113-156-4.dreamhost.com [208.113.156.4]
         14   326 ms   328 ms   336 ms  ip-208-113-156-14.dreamhost.com [208.113.156.14]
         15   327 ms   331 ms   333 ms  apache2-udder.crisp.dreamhost.com [208.113.173.102]

    And then for the machine that doesn't work:

        C:\Users\Microsoft>ping www.balihaicruises.com
        Ping request could not find host www.balihaicruises.com. Please check the name and try again.

        C:\Users\Microsoft>tracert www.balihaicruises.com
        Unable to resolve target system name www.balihaicruises.com.
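
    The two transcripts point squarely at name resolution: the failing machine cannot resolve the hostname at all, while the working one can. A hedged way to isolate where resolution breaks, using stock Windows tools, is to query a public resolver directly and compare:

        C:\> nslookup www.balihaicruises.com 8.8.8.8    (query Google's public DNS directly)
        C:\> ipconfig /all                              (see which DNS servers the adapter uses)
        C:\> ipconfig /flushdns                         (clear any stale local resolver cache)

    If the query against 8.8.8.8 succeeds where the default lookup fails, the configured DNS server (often the router) is the culprit rather than the site or its host.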

  • Pathfinding in a multi-goal, multi-agent environment

    - by Rohan Agrawal
    I have an environment with multiple agents (a), multiple goals (g), and obstacles (o):

        . . . a o . . .
        . . . . o . g .
        . a . . . . . .
        . . . . o . . .
        . o o o o . g .
        . o . . . . . .
        . o . . . . o .
        . . . o o o o a

    What would be an appropriate pathfinding algorithm for this environment? The only thing I can think of right now is to run a separate A* search for each goal, but I don't think that's very efficient.
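
    One cheaper alternative to running A* once per goal: because the grid is unweighted, a single breadth-first search seeded with every goal at once yields, for each free cell, the distance to its nearest goal, and each agent then simply steps to the neighbour with the smallest distance. A minimal sketch (the grid literal encodes the map above using the question's a/g/o/. symbols):

        # One multi-source BFS instead of one A* per goal.
        from collections import deque

        grid = [
            "...ao...",
            "....o.g.",
            ".a......",
            "....o...",
            ".oooo.g.",
            ".o......",
            ".o....o.",
            "...ooooa",
        ]

        def distance_field(grid):
            rows, cols = len(grid), len(grid[0])
            dist = [[None] * cols for _ in range(rows)]
            frontier = deque()
            for r in range(rows):
                for c in range(cols):
                    if grid[r][c] == "g":        # seed the search with every goal
                        dist[r][c] = 0
                        frontier.append((r, c))
            while frontier:
                r, c = frontier.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] != "o" and dist[nr][nc] is None):
                        dist[nr][nc] = dist[r][c] + 1
                        frontier.append((nr, nc))
            return dist

        field = distance_field(grid)
        # An agent at (r, c) moves to the 4-neighbour with the smallest field value;
        # cells left at None are obstacles or unreachable.

    This assumes 4-connected movement and unit step costs; with weighted terrain the same idea works as a multi-source Dijkstra instead.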

  • GRUB shows the same Linux image twice

    - by binW
    After a recent update, I get multiple entries for the same Linux kernel version in the boot menu. I have tried running update-grub2, but it also lists the same linux-image version twice:

        adnan@adnan-laptop:/boot$ sudo update-grub2
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.32-26-generic
        Found initrd image: /boot/initrd.img-2.6.32-26-generic
        Found Windows 7 (loader) on /dev/sda1
        Found linux image: /boot/vmlinuz-2.6.32-26-generic
        Found initrd image: /boot/initrd.img-2.6.32-26-generic
        Found memtest86+ image: /boot/memtest86+.bin
        done

    As you can see, vmlinuz and initrd are found multiple times, but there is only one vmlinuz and one initrd file in /boot:

        adnan@adnan-laptop:/boot$ ls -l
        total 15120
        -rw-r--r-- 1 root root  646144 2010-11-24 15:58 abi-2.6.32-26-generic
        -rw-r--r-- 1 root root  110601 2010-11-24 15:58 config-2.6.32-26-generic
        drwxr-xr-x 3 root root    4096 2011-01-01 18:59 grub
        -rw-r--r-- 1 root root 8335528 2010-12-20 23:36 initrd.img-2.6.32-26-generic
        -rw-r--r-- 1 root root  160280 2010-03-23 14:40 memtest86+.bin
        -rw-r--r-- 1 root root 2156100 2010-11-24 15:58 System.map-2.6.32-26-generic
        -rw-r--r-- 1 root root    1336 2010-11-24 16:00 vmcoreinfo-2.6.32-26-generic
        -rw-r--r-- 1 root root 4050080 2010-11-24 15:58 vmlinuz-2.6.32-26-generic

    Can someone tell me why update-grub2 finds vmlinuz and initrd twice, and how to stop this from happening?
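
    One commonly reported cause of this exact symptom is GRUB seeing the same disk twice, either through a stale /boot/grub/device.map that lists one physical disk under two names, or through /boot being reachable via two mounts. A hedged set of checks:

        cat /boot/grub/device.map   # the same disk listed twice (e.g. hd0 and hd1) doubles the scan
        mount | grep boot           # a bind mount or a second mount of /boot has the same effect
        sudo update-grub2           # regenerate after correcting either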

  • Website content hosted with Google. Good or bad?

    - by user305052
    I recently decided to host my styles.css and various scripts on Google Docs and link them into my website. I also have all my images hosted through Picasa so that they too will load much faster and consistently across users. My site has most of its traffic from Japan, Africa, and South America, so I assume there will be a performance boost for my users since my server is hosted in Hong Kong. I (in Canada) have measured my load times to be half of what they used to be. Basically it's a free CDN for my personal stuff. I'm not too sure about all of this yet, so here's my question: what are the caveats of this setup? EDIT: So after rummaging through the ToS of both Picasa and Docs, there doesn't seem to be anything wrong with this kind of use.

  • Can I grant permissions on files in Windows 7 using a security identifier from another machine?

    - by Thomas
    I have an external hard drive, and I wish to grant permissions on some files to users from two different computers without having to hook the drive up to each of them. I know the SID of the user on the other computer; I'd like to know if and how I can grant permissions on files using the SID. I'm running Windows 7 Professional 64-bit, and "the other" computer runs Windows 7 Home Premium 64-bit. They are not in a domain, just separate computers on a home network (not even the same homegroup). Note: duplicate of the question "Is there a way to give NTFS file permissions to users from other Windows installations?"
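
    It should work, because NTFS stores raw SIDs in the ACL rather than user names; the local machine will simply show an unresolvable SID wherever it cannot map the SID to a name. A hedged example using the built-in icacls tool, which accepts a raw SID when it is prefixed with an asterisk (the SID and path below are placeholders):

        icacls "E:\shared\report.docx" /grant *S-1-5-21-1111111111-2222222222-3333333333-1001:(M)

    Here (M) grants modify access; icacls /? lists the other permission letters.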

  • Tag-structured filesystems

    - by A.Rashad
    I hope this is the correct site; I lose my way between the four sister sites. :) Let me ask the question this way: all file systems I have seen before are hierarchical, meaning a root directory with some branching directories, and so on, until we have files residing in these directories. The exception is the AS/400 file structure, which has the concept of a Library that serves somewhat like a directory, but only one level deep. Why not have a directory-less filesystem where files are placed in a single location, and the file identifiers are referenced by a database of tag/file relationships? This way there would be no need for symbolic links: one file may have relations to multiple subjects, rather than only a single parent directory containing it. I hope the idea is clear.
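
    The tag/file relationship database the question describes is easy to prototype on top of an ordinary filesystem; here is a minimal sketch using Python's standard sqlite3 module (the schema and names are illustrative, not taken from any existing tag filesystem):

        # Files live in one flat store; tags and their links live in a database.
        import sqlite3

        db = sqlite3.connect("tagfs.db")
        db.executescript("""
            CREATE TABLE IF NOT EXISTS files (id INTEGER PRIMARY KEY, blob_name TEXT UNIQUE);
            CREATE TABLE IF NOT EXISTS tags  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE IF NOT EXISTS file_tags (
                file_id INTEGER REFERENCES files(id),
                tag_id  INTEGER REFERENCES tags(id),
                PRIMARY KEY (file_id, tag_id)   -- one file, many tags, no parent directory
            );
        """)

        # "Which files carry both the 'photos' and 'holiday' tags?"
        rows = db.execute("""
            SELECT f.blob_name
            FROM files f
            JOIN file_tags ft ON ft.file_id = f.id
            JOIN tags t       ON t.id = ft.tag_id
            WHERE t.name IN ('photos', 'holiday')
            GROUP BY f.id
            HAVING COUNT(DISTINCT t.name) = 2
        """).fetchall()

    A file belonging to several subjects is then just several rows in file_tags, which is exactly the symbolic-link case the question wants to avoid.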

  • What are the best strategies for selling Android apps?

    - by Rob S.
    I'm a young developer hoping to sell the apps I made for Android soon. My applications are basically 99% finished, so I'm investigating the best marketing strategy for selling them. I'm sure the brilliant minds here can give me some great advice. I'm particularly interested in your thoughts on the following points (especially from experienced Android developers):

    - Is it more profitable to release an app for free with ads, or to sell an ad-free app for a price? Perhaps a combination of a free ad-supported version and a paid ad-free version?
    - If you give away an app for free with ads on it, is it ethical to decline bending over backwards to support it?
    - How much does piracy actually affect potential sales? Should any effort be put towards preventing it?
    - Can you still make a profit off your application if you make it open source? Could you perhaps make more of a profit from the attention you would get by doing so?
    - Is Google's Android Marketplace really the best place to release Android apps?
    - Is it worthwhile to maintain a developer blog or website to keep users updated on your development progress and software releases?

    Any other suggestions to maximize profit while keeping users happy and coming back for more would also be greatly appreciated. While I appreciate general tips and tricks, I'd ask that, if possible, you go the extra step and show how they specifically apply to selling Android apps. Marketing statistics, developer retrospectives, and any experience you can share from your time selling Android apps are what I would love to see most. Thank you very much in advance for your time. I truly appreciate all the responses I receive.

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles at a multilingual site. I want to be able to write an article in two languages and have the versions available on separate URLs:

        thesite.foo/english-breakfast
        thesite.com/engelsk-frukost

    If the user's web browser is set to English, I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only; English only; Swedish versions + English only; English versions + Swedish only. I'd like to publish these as four RSS feeds, and I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I shall be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some way for me to protect myself against comment spam. I am leaning towards learning Drupal; I suspect I'll have to code this behavior myself as a module. To be frank, I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?
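
    For reference, the translation link the question refers to is plain HTML in the page head; on the Swedish page it would look something like this (the URLs are the question's own examples):

        <link rel="alternate" hreflang="en" href="http://thesite.foo/english-breakfast" />

    with the English page carrying the mirror-image link using hreflang="sv".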

  • Does rDNS for a mail server have to match the mail server hostname exactly?

    - by threecheeseopera
    Typically when setting up a mail server, I create an rDNS record for the mail server's IP to match the mail server's hostname (e.g. mail.example.com). Can I instead set the rDNS PTR to match the parent domain (e.g. example.com), if this server is being used for multiple purposes, and still send mail successfully (i.e. not be classified as spam because of mismatched rDNS)? Thanks!

    EDIT: The article at http://en.wikipedia.org/wiki/Forward_Confirmed_reverse_DNS seems to indicate that it might be more complicated than I had thought. For instance: 1) I did not know that you could have multiple PTR records for a given IP; 2) it appears that as long as each PTR record matches an A record, everything is good (which basically nullifies my question). Would you agree?
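
    A quick way to check the forward-confirmed loop from a shell, assuming BIND's dig is available (the IP and name below are placeholders):

        dig -x 203.0.113.25 +short    # PTR lookup: IP -> name(s)
        dig A example.com +short      # forward lookup: does the name point back at the IP?

    The FCrDNS test described in the Wikipedia article passes when at least one PTR answer resolves back to the original address, so a PTR of example.com is fine as long as example.com has a matching A record.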

  • Using LDAP as an auth method for git repositories

    - by Lenni
    I want to convince my boss that we should be using git for version control. He says that it absolutely must authenticate users against our central LDAP server. I looked at the various solutions (gitweb, gitorious, ...) and couldn't really find a definitive answer about whether they support LDAP authentication. The only solution I could find a little info on was an Apache + mod_ldap setup. But that would mean that the user authenticating against LDAP wouldn't necessarily be the same as the actual git user, right? (Not that this is a huge problem, but it's something that would bug me.) So, what's the best way to authenticate git users via LDAP?
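
    For the Apache route, a hedged configuration sketch using git's smart HTTP backend together with mod_authnz_ldap (the paths, hostname, and base DN are placeholders for your environment):

        # git over HTTP, basic auth checked against LDAP
        SetEnv GIT_PROJECT_ROOT /srv/git
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
        <Location /git/>
            AuthType Basic
            AuthName "Git repositories"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Location>

    On the identity question: the LDAP name only gates access; the author recorded in each commit still comes from the committer's own git configuration, so the mismatch you mention exists in every git setup and is usually handled by policy (or a server-side hook) rather than by the authentication layer.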

  • Windows service application runs fine on Windows XP but crashes on Windows 7

    - by Abbas Siddiqui
    I am sorry if my question has been asked before; I searched extensively but didn't find it. If it has, please post a link to that question. I have developed a Windows service that works fine on Windows XP. When I installed it on Windows 7, it installed and worked fine for a few minutes; after that, it crashes with a "... has stopped working. Windows is checking for a solution to the problem." message. The log entry is as follows:

        Fault bucket 1155193276, type 5
        Event Name: CLR20r3
        Response: Not available
        Cab Id: 0
        Problem signature:
        P1: windowsserviceapp.exe
        P2: 1.0.0.0
        P3: 4bf29a85
        P4: System.Windows.Forms
        P5: 2.0.0.0
        P6: 4a275ebd
        P7: 16cf
        P8: 159
        P9: System.ComponentModel.Win32
        P10:
        Attached files:
        C:\Users\DELL\AppData\Local\Temp\WERF98D.tmp.WERInternalMetadata.xml
        These files may be available here:
        C:\Users\DELL\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_windowsserviceap_89ea5da5168ff1535681aa613b5f7bf2b1636dc_111d24f1
        Analysis symbol:
        Rechecking for solution: 0
        Report Id: 24dc8c83-62a1-11df-b1ee-00271352d813

  • Chatbox/shoutbox/forum with the following features.

    - by Mick
    I would like to set up a simple forum for a small number of users. The features I would like are the following, in order of priority:

    - Logging in is not compulsory for posting.
    - A single thread.
    - All the messages are immediately visible; no need to "open" them.
    - Most recent message at the top by default.
    - The ability to give selected users the power to delete spam messages.
    - The ability to upload photos.
    - Avatars.
    - Configurable skins/size/shape.

    So far the closest I have come is www.tag-world.com, but there are no avatars and no photos.

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official webpage says:

        Assembling DjVu Images into Multipage Documents

        The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce a multipage DjVu file when fed with multiple input files. The files produced are smaller than if the pages were compressed separately, because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti. To assemble a bunch of DjVu images into a single BUNDLED document, simply type:

            djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu

        To assemble a bunch of DjVu images into an INDIRECT document, type:

            djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu

        where documentdir must be an existing directory into which all the individual page files will be copied. To disassemble a BUNDLED document into an INDIRECT one, simply say:

            djvujoin document.djvu documentdir/indexfile.djvu

        To convert a multipage document from one of the old 2.0 multipage formats, do:

            djvureindex olddocument newdocument

        The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge.

    I couldn't find djvujoin or djvubundle for Ubuntu; djvulibre doesn't have them either. Am I missing something? Thanks.
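
    For what it's worth, the djvulibre package that Ubuntu does ship covers the bundling use case with its djvm tool; a quick example (filenames are placeholders):

        djvm -c document.djvu page1.djvu page2.djvu page3.djvu   # create a bundled multipage document
        djvm -l document.djvu                                    # list the component pages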

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts can call selected methods and access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (while not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures.

    We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out to someone who might have real experience of this problem. NB: our internal document's data structure is quite complex, and it could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help?

    How do scripting languages/APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to writing a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives?

    *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
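
    For illustration, a minimal sketch of the data-facade idea (in Python for brevity; the question's scripts are C#, but the shape is the same): the script-facing class exposes stable names and delegates to whatever the internal model currently looks like, so the internals can be reorganised without breaking scripts.

        # Internal model: free to change between releases.
        class _InternalCustomerRecord:
            def __init__(self):
                self.fields = {"given": "Ada", "family": "Lovelace",
                               "addr_lines": ["12 St James's Sq", "London"]}

        # Public facade: the only surface scripts may touch; its property
        # names are a contract kept stable across versions.
        class Customer:
            def __init__(self, record):
                self._record = record                  # wrapped, never exposed

            @property
            def name(self):
                f = self._record.fields
                return "{} {}".format(f["given"], f["family"])

            @property
            def address(self):
                return tuple(self._record.fields["addr_lines"])  # read-only copy

        # If the internal record later stores one 'full_name' field instead of
        # two, only Customer.name changes; every script keeps working.

    For nested collections, the usual approach is the same pattern one level down: the facade returns facade objects (or read-only views) rather than the internal collections themselves.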

  • CentOS 6 vsftpd server

    - by henry
    I have installed a vsftpd server on my CentOS 6 server and created three users. The first user can access the FTP server using his system password. All the users are in chroot_list. When the second user tries to log in over FTP with his password, he gets the error "operation not supported". My server's SELinux configuration:

        [henry@admin ~]$ getsebool -a | grep ftp
        allow_ftpd_anon_write --> off
        allow_ftpd_full_access --> off
        allow_ftpd_use_cifs --> off
        allow_ftpd_use_nfs --> off
        ftp_home_dir --> on
        ftpd_connect_db --> off
        httpd_enable_ftp_server --> off
        sftpd_anon_write --> off
        sftpd_enable_homedirs --> off
        sftpd_full_access --> off
        sftpd_write_ssh_home --> off
        tftp_anon_write --> off

    How can I troubleshoot this issue?
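
    A hedged first round of diagnostics, on the assumption that SELinux is involved (the booleans above suggest a near-default policy):

        sudo grep vsftpd /var/log/audit/audit.log | tail   # AVC denials name the blocked operation
        sudo setenforce 0                                  # temporarily permissive: does the login work now?
        sudo setsebool -P allow_ftpd_full_access on        # broad fix if denials confirm it; loosens policy

    If the error persists even with SELinux permissive, compare the failing user's home directory ownership and permissions with the working user's, since chrooted vsftpd is picky about both.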

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.NET MVC application, and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test." I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to find out half an hour later when my integration tests break; I want to know immediately.

    Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned.

    What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified.

    What I'm stuck doing: to test the view, I write an integration test: set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified.

    I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged by a unit test that runs multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or, if not bad, then not the expected way to get said feedback?)

  • Windows Hosted Network Redirect to IIS

    - by rulestein
    I would like to set up a Windows 7 machine as a Wi-Fi hotspot that always redirects to the IIS web hosting on the same machine. I have the hotspot piece working with the built-in hosted network feature of Windows 7, and the web hosting was easy enough with IIS. Now, how do I connect the two? The idea is to have a standalone device: users will be able to connect to the Wi-Fi, and any web page they go to will redirect to the internal web page. I only expect one or two users at a time, and there won't be any internet access involved.

  • Updating the $PATH for running a command through SSH with an LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OS X 10.6 server to host Git repositories, so we need to push commits to the server through SSH. The server has only an admin account and takes its user list from an LDAP server. Since access is through a non-interactive shell, git operations cannot complete: the git executables are not in the default path. As the users are network users, they do not have a local home folder, so I cannot use a ~/.bashrc or similar solution. I browsed several articles here and there but could not get it working in a nice and clean setup. Here are the methods I have gathered so far:

    - I could update the default PATH environment to include the git executables folder. However, I could not manage to do this successfully: updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored.
    - From the ssh man page, I read that a BASH_ENV variable can be set to have an optional script executed. However, I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has some info on how it is supposed to be done, please, by all means!
    - I can fix this problem by creating a .bashrc with a PATH correction in the system root (since all network users start there, as they do not have a home folder). But it just feels wrong. Additionally, if we do create a home folder for a user, the git command would fail again.
    - I can install a third-party application to set up hooks on login and then run a script that creates a home directory with the necessary path corrections. This smells like a backyard tinkering and duct tape solution.
    - I can install a small script on the server and ForceCommand sshd to run this script on login. The script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run that command, or just starts a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164

    The last one is the best method I have found so far. Any suggestions on how to deal with this properly?
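
    One workaround that sidesteps the server's PATH entirely: tell the client where the git binaries live on the server. These are standard git configuration keys; the path shown is a guess at a typical OS X install location.

        git config remote.origin.receivepack /usr/local/git/bin/git-receive-pack
        git config remote.origin.uploadpack  /usr/local/git/bin/git-upload-pack
        # one-off equivalent:
        git push --receive-pack=/usr/local/git/bin/git-receive-pack origin master

    It has to be set per clone, though, so the ForceCommand approach remains the cleaner server-side fix.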

  • How to increase max FD limit for a daemon process running under a headless user?

    - by Ameliorator
    To increase the FD limit for a daemon process running under a headless user on an Ubuntu Linux machine, we made the following changes in /etc/security/limits.conf:

        soft nofile 10000
        hard nofile 10000

    We also added "session required pam_limits.so" in /etc/pam.d/login. The changes took effect for all the users who logged out and logged in again: whatever new processes start under those users get the new FD limits. But for the daemon running under a headless user, the changes are not reflected. How can the changes be made to take effect for a daemon running under a headless user?
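
    The likely explanation: limits.conf is applied by pam_limits during a PAM login, so a daemon that is never started through a PAM session simply inherits the limits of whatever launched it (usually init). A hedged check and workaround ("mydaemon" is a placeholder name):

        cat /proc/$(pidof mydaemon)/limits   # the "Max open files" line shows what the daemon actually got

        # in the daemon's init script, before the daemon is started:
        ulimit -n 10000

    Raising the limit in the init script (or whatever wrapper execs the daemon) works regardless of PAM; it takes effect once the daemon is restarted.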

  • Recommendations for Spam Filter

    - by dotdev
    We are currently using MxGuardDog for spam filtering; it works by pointing our MX records at their mail servers. The service seems pretty good: it keeps out the obvious spam, but I would still say it lets through mail that to me is spam, though I accept that on the surface those emails may not flag any of the universally recognised indicators of spam. If an email comes through that I believe is spam, I can log in to the web console and blacklist the email address/domain. However, 99% of the time I don't, because it's inconvenient; or, should I say, it's far less convenient than a button in Outlook that lets me report the email/domain as spam. So, what we're looking for is a similar service, i.e. cloud spam filtering, that has an Outlook plugin so that administrators/users can report spam. We are only a small company, 10 users, so cost is of course an issue for us. Many thanks, dotdev

  • Would a Socket Connection Outperform an Interval-Based Database Sweep and Requests?

    - by Jascha
    I'm building a small chat application to add to an existing framework. There will only be 20-50 users MAX at any one time. I was wondering if I could get away with updating a cache file containing (semi-)live chat data for whichever users happen to be chatting, just by performing timed queries and regular AJAX refreshes for new data, as opposed to learning how to open and maintain a socket connection. I'm sure there are existing chat plug-ins out there, but I just had a hell of a time installing one, and I could see building the whole damn thing taking just as much time as plugging one in. Am I off to a bad start? Thanks in advance. -J (P.S. This is a semi-closed network behind a PHP login, so security isn't a great concern.)
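
    At 20-50 users, polling is usually fine; the main trick is to return only messages newer than what the client already has, so each AJAX refresh stays tiny. A minimal sketch of that server-side shape (Python purely for illustration, since the real app is PHP; the one-JSON-message-per-line file format is an assumption):

        import json, time

        CACHE = "chat_cache.jsonl"

        def append_message(user, text):
            with open(CACHE, "a") as f:
                f.write(json.dumps({"t": time.time(), "user": user, "text": text}) + "\n")

        def messages_since(last_seen):
            """Return messages newer than the client's last-seen timestamp."""
            out = []
            try:
                with open(CACHE) as f:
                    for line in f:
                        msg = json.loads(line)
                        if msg["t"] > last_seen:
                            out.append(msg)
            except FileNotFoundError:
                pass
            return out   # the AJAX endpoint serialises this back to the client

    The client polls every few seconds with its newest timestamp; if latency ever matters more, the same interface can later be swapped for long polling or WebSockets without changing the page.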

  • Ubuntu Server 12.04 CPU Load

    - by zertux
    I have a server (2x hexa-core Xeon E5649 2.53GHz with HT, 32GB RAM, and 20000 GB bandwidth) running Ubuntu Server 12.04 LTS. The server runs LAMP and serves one website only; the estimated number of simultaneous users is ~15,000. At the moment I have around 2,000 users online, each of whom runs about 50 MySQL queries (small ones, mostly SELECT and INSERT) from the beginning to the end of the session. The server CPU load is high at this number of connections, while RAM usage is almost 1GB out of 32GB. It's worth mentioning that the server runs very fast with no problems at all, but I am concerned about the load average. Screenshot: http://s12.postimage.org/z7hi6mz3h/photo.png

        top - 03:02:43 up 9 min,  2 users,  load average: 50.83, 30.14, 12.83
        Tasks: 432 total,   1 running, 430 sleeping,   0 stopped,   1 zombie
        Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 66.5%id, 33.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:  32939992k total,  3111604k used, 29828388k free,    84108k buffers
        Swap:  2048280k total,        0k used,  2048280k free,  1621640k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         2860 root      20   0 25820 2288 1420 S    3  0.0   0:11.18 htop
         1182 root      20   0     0    0    0 D    2  0.0   0:01.46 kjournald
         1935 mysql     20   0 12.3g 161m 7924 S    1  0.5 102:31.45 mysqld
           11 root      20   0     0    0    0 S    0  0.0   0:00.38 kworker/0:1
         1822 www-data  20   0  247m  25m 4188 D    0  0.1   0:01.81 apache2
         2920 www-data  20   0     0    0    0 Z    0  0.0   0:01.20 apache2 <defunct>
         2942 www-data  20   0  247m  23m 3056 D    0  0.1   0:00.20 apache2
         3516 www-data  20   0  247m  23m 3028 D    0  0.1   0:00.06 apache2
         3521 www-data  20   0  247m  23m 3020 D    0  0.1   0:00.09 apache2
         3664 www-data  20   0  247m  23m 3132 D    0  0.1   0:00.09 apache2
         3674 www-data  20   0  247m  23m 3252 D    0  0.1   0:00.06 apache2
         3713 www-data  20   0  247m  23m 3040 D    0  0.1   0:00.09 apache2
            1 root      20   0 24328 2284 1344 S    0  0.0   0:03.09 init
            2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
            3 root      20   0     0    0    0 S    0  0.0   0:00.01 ksoftirqd/0
            6 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
            9 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/1:0

        root@server:~/codes# vmstat 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd     free   buff   cache  si   so    bi    bo    in    cs us sy id wa
        19  0      0 29684012  86112 1689844   0    0    19   590   254   231 48  0 47  5
        23  0      0 29704812  86128 1697672   0    0     4   320 11100  8121 77  1 22  0
        33  0      0 29671044  86156 1705308   0    0     0  5440 13190  9140 95  1  4  0
        33  3      0 29670088  86160 1706288   0    0     0 32932 12275  7297 99  0  1  0
        35  0      0 29693456  86188 1710724   0    0     4   676 12701  7867 98  1  1  0
        ^C

    I have not changed any of the default configurations that come with Ubuntu. Is this load normal for such a powerful server? Is there any optimization I can make to Apache/MySQL to minimize the load? What do you recommend?
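
    A hedged reading of those numbers: the 33.1%wa in top, the apache2 and kjournald processes stuck in state D, and the large bo (blocks out) column in vmstat all point at disk writes rather than CPU as the bottleneck; Linux load average counts processes waiting on I/O as well as those wanting CPU. Commands that would confirm this (iostat comes from the sysstat package on Ubuntu):

        sudo apt-get install sysstat
        iostat -x 1         # sustained high %util and await on the data disk confirm an I/O bottleneck
        mysqladmin status   # quick view of query rate and slow-query count

    If it is I/O-bound, the usual first steps are tuning InnoDB (buffer pool size, log flushing) and moving MySQL onto its own disks, which matches the plan of giving it a dedicated box.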

  • 10 gigabit or 1 gigabit switch

    - by Guntis
    We are planning to move MySQL to a dedicated box. At the moment we have web servers, and MySQL is running on each. The question: is it cheaper to buy a 10G switch and put a 10G network card into the MySQL server, or to buy a normal gigabit switch and connect the MySQL box to the switch with multiple network cables? In the 1G scenario we would give each web server a different MySQL IP address. I don't think a MySQL box with a single 1G link is enough to satisfy the MySQL traffic from multiple web boxes. At the moment we have three servers which run MySQL/web; the plan is to add a fourth server for MySQL only. Thanks.

    Edit: if we buy a 1G switch with mini-GBIC ports, can we put 10G connectors in the mini-GBIC slots and then connect the MySQL box to that port?
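
    The multiple-cable option is normally implemented as NIC bonding rather than one IP per cable; a hedged Ubuntu /etc/network/interfaces sketch (interface names and addresses are placeholders, and 802.3ad needs matching LACP configuration on the switch):

        auto bond0
        iface bond0 inet static
            address 192.168.10.5
            netmask 255.255.255.0
            bond-slaves eth0 eth1 eth2 eth3
            bond-mode 802.3ad    # LACP
            bond-miimon 100

    The caveat: LACP balances across flows, so any single web-server-to-MySQL connection still tops out at 1 Gb/s; it is the aggregate across the web servers that scales.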

  • What equipment do real ISPs use?

    - by Allanrbo
    In a dormitory of 550 residents, people often mistakenly set up DHCP servers for the whole network by plugging in their private Wi-Fi routers the wrong way round. Recently, someone also mistakenly configured their PC with a static IP identical to that of the default gateway. We use cheap 3Com switches at the moment. I know that Cisco switches support DHCP snooping to solve the DHCP problem, but that still does not solve the default-gateway IP takeover problem. What sort of switch equipment do real ISPs use so that their customers cannot break the network for the other customers?

    What we ended up doing, in case anyone is curious: separate VLANs for each user; and in fact not just for the 550 users, but for 2,500 users (11 dorms). Here's a page describing the setup: http://k-net.dk/technicalsetup/ (the section "Transparent firewall using VLANs"). There was no significant load on the router server, as I had feared in one of the comments below, even at 800 Mbps.
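
    On managed switches the two problems map onto two standard features: DHCP snooping blocks the rogue DHCP servers, and Dynamic ARP Inspection (built on the snooping binding table) blocks the gateway-address takeover. A hedged Cisco IOS sketch (the VLAN number and port are placeholders):

        ip dhcp snooping
        ip dhcp snooping vlan 10
        ip arp inspection vlan 10
        interface GigabitEthernet0/1
          description uplink to the legitimate DHCP server
          ip dhcp snooping trust
          ip arp inspection trust

    Untrusted ports then drop DHCP server replies and ARP packets that contradict the snooping table, which covers both failure modes described above.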
