Daily Archives

Articles indexed Thursday July 5 2012


  • Current Technologies

    - by Charles Cline
    I currently work at the University of Kansas (KU); before that I was at Stanford University, specifically the Stanford Linear Accelerator Center (SLAC). Collaborating with various higher-ed institutions over the past several years, I've seen a marked increase in the Microsoft side of the house. To give you an idea of our current environment, here are some of the things we (Enterprise Systems) have been working on during the two years I've been at KU:

      - Migrated from Novell to Active Directory (AD), although we're still leveraging Novell for IDM. We currently have 550,000+ objects in AD, and we still have several departments to bring in.
      - Upgraded from Exchange 2003 to Exchange 2010 and Forefront Online Protection for Exchange (FOPE)
      - Implemented SCCM 2007 for Windows systems management
      - Implemented central file storage using EMC products for the backend, with CIFS as the frontend
      - Restructuring AD domains and forests to decrease administrative overhead and provide a primary authentication mechanism for the entire University
      - Determining Key Performance Indicators for AD and Exchange
      - Implemented SCOM 2007 to monitor AD and Exchange
      - Implemented Confluence for collaboration within IT and with other technology providers at the University
      - Implemented Data Protection Manager (DPM) for backup of AD and Exchange
      - Built a test and QA environment to better facilitate upcoming changes to the environment
      - Almost ready to raise the AD domain level to 2008 R2

    I'm sure I'm missing things, and my next post will cover some of the things we're getting ready for – like Centrify to provide AD for OS X and Linux systems. If anyone would like more info on a particular area, please drop me a line. I'd be happy to discuss.

    Read the article

  • Independence Day for Software Components – Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled "independent" software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich's "Grand Unified Theory of Software Design" talk at MountainWest RubyConf 2009. I finally watched that video this morning, and I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is:

      1. The common birth of two or more at the same time.
      2. That which is born or produced with another.
      3. The act of growing together.

    The brief Wikipedia page about Connascent Software Components says that two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems.

    The term was introduced to the software world in Meilir Page-Jones' 1996 book "What Every Programmer Should Know About Object-Oriented Design". The middle third of that book is the author's proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as "Fundamentals of Object-Oriented Design in UML". Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, "is worth the price of the entire book". (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I'm looking forward to getting the book and learning about connascence from the original source.)

    Meanwhile, here's my summary of Weirich's summary of Page-Jones' writing about connascence: the stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name: multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don't need to go changing them later.

    Connascence of Type: multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it's an issue with evil JavaScript type coercion.

    Connascence of Meaning: multiple components must agree on the meaning of particular values, e.g. that "1" means normal customer and "2" means preferred customer. The solution to this is to use constants or enums instead of "magic" strings or numbers, which reduces the coupling by changing the connascence form from "meaning" to "name".

    Connascence of Position: multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.:

        eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification");

    The more parameters there are, the stronger the connascence of position is between the component and its callers. In the example above, it's not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This "introduce parameter object" refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. This points out two "rules" of connascence:

      - The Rule of Degree: the acceptability of connascence is related to the degree of its occurrence.
      - The Rule of Locality: stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods.

    Connascence of Algorithm: multiple components must agree on a particular algorithm. Be DRY – Don't Repeat Yourself. If you have "cloned" code in multiple locations, refactor it into a common function.

    Those are the "static" forms of connascence. There are also "dynamic" forms, including...

    Connascence of Execution: the order of execution of multiple components is important. Consumers of your class shouldn't have to know that they have to call an .Initialize method before it's safe to call a .DoSomething method.

    Connascence of Timing: the timing of the execution of multiple components is important. I'll have to read up on this one when I get the book, but I assume it's largely about threading.

    Connascence of Identity: multiple components must reference the same entity. The example Weirich gives is when you have two instances of the "Bob" Employee class and you call the .RaiseSalary method on one and then the .Pay method on the other: does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I'll make corrections if necessary and share any other useful information I might learn.

    See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
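
    To make the enum and "introduce parameter object" points above concrete, here is a minimal sketch. It is in Python rather than the C#-style pseudocode used in the examples, and every name and address in it is made up for illustration; the idea is simply that callers end up agreeing with the method only on names (connascence of name) instead of on magic values and argument positions.

        from dataclasses import dataclass
        from enum import Enum

        class CustomerKind(Enum):
            NORMAL = 1       # previously the "magic" value 1
            PREFERRED = 2    # previously the "magic" value 2

        @dataclass
        class EmailMessage:
            sender: str
            recipient: str
            subject: str
            body: str

        def send(message: EmailMessage) -> None:
            # Callers now agree with this function only on field *names*,
            # not on the order of four positional strings.
            print(f"sending {message.subject!r} from {message.sender} to {message.recipient}")

        # Hypothetical call site (placeholder addresses):
        send(EmailMessage(
            sender="orders@example.com",
            recipient="customer@example.com",
            subject="Order completion notification",
            body="Your order is complete",
        ))

    Because the call site names every field, the caller and send() now agree only on names, which is the weaker form of connascence described above.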

    Read the article

  • Apache HTTP Server - Other PCs on the local network can't access web application [closed]

    - by Manellen
    I have a problem accessing my web application from other PCs via the LAN. Here is the scenario: I have three computers connected directly to the LAN, and one of them has Apache HTTP Server installed along with a web application. The two other computers try to reach the web application on the Apache machine, but they can't access it; they just display an error instead. Does anyone know how to fix this, i.e. how to make those two computers able to reach the web application over the LAN? Thanks in advance.

    Read the article

  • Ubuntu server 12.04 auto installation freezes at "Kickseed Running" if ks.cfg has post scripts

    - by john206
    I'm trying to make a custom Ubuntu Server ISO file. The kickstart file (ks.cfg) runs smoothly when there is no %post section in it, and Ubuntu installs correctly with the ks configuration. With %post, the installation finishes installing base, apt and grub, then echoes "Kickseed Running..." and freezes at 0%. I thought maybe apt-get update doesn't work in a ks file, so I tried installing other packages like apache2 instead, but no luck. I have created a dozen ISO images and installed them in VirtualBox. I have been googling for 3 days and checked the Ubuntu forums but haven't figured out the issue. I appreciate your help. This is how I made the ISO image; my ks.cfg and txt.cfg files are located in the isolinux directory:

        root@ubuntu:/home/work# mount -o loop ubuntu-12.04-amd64.iso original-iso/
        rsync -a original-iso/ custom-iso/
        cp ks.cfg custom-iso/isolinux/
        cp txt.cfg custom-iso/isolinux/
        chmod -R 777 custom-iso/

        #Creating Iso image
        mkisofs -D -r -V "$IMAGE_NAME" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ~/ubuntu-12.04-alternate-custom-amd64.iso custom-iso/

    ks.cfg:

        #Generated by Kickstart Configurator
        #platform=AMD64 or Intel EM64T

        #System language
        lang en_US
        #Language modules to install
        langsupport en_US
        #System keyboard
        keyboard us
        #System mouse
        mouse
        #System timezone
        timezone America/Los_Angeles
        #Root password
        rootpw --iscrypted somethingsomething
        #Initial user
        user ubuntu --fullname "ubuntu" --iscrypted --password somethingsomething.
        #Reboot after installation
        reboot
        #Use text mode install
        text
        #Install OS instead of upgrade
        install
        #Use CDROM installation media
        cdrom
        #System bootloader configuration
        bootloader --location=mbr
        #Clear the Master Boot Record
        zerombr yes
        #Partition clearing information
        clearpart --all --initlabel
        #Disk partitioning information
        part /boot --size 128 --fstype=ext3 --asprimary
        part / --size 512 --fstype=ext3 --asprimary
        part swap --size 512
        part /tmp --size 512 --fstype=ext3
        part /var --size 512 --fstype=ext3
        part /usr --size 4096 --fstype=ext3
        part /home --size 2048 --fstype=ext3
        #System authorization infomation
        auth --useshadow --enablemd5
        #Network information
        network --bootproto=dhcp --device=eth0
        #Firewall configuration
        firewall --disabled --http --ftp --ssh
        #X Window System configuration information
        xconfig --depth=32 --resolution=1024x768 --defaultdesktop=GNOME

        %post
        apt-get update
        mkdir /home/user

    txt.cfg:

        default autoinstall
        label autoinstall
          menu label ^Install Custom Ubuntu Server
          kernel /install/vmlinuz
          append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet ks=cdrom:/isolinux/ks.cfg --
        label install
          menu label ^Install Ubuntu Server
          kernel /install/vmlinuz
          append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet --
        label cloud
          menu label ^Multiple server install with MAAS
          kernel /install/vmlinuz
          append modules=maas-enlist-udeb vga=788 initrd=/install/initrd.gz quiet --
        label check
          menu label ^Check disc for defects
          kernel /install/vmlinuz
          append MENU=/bin/cdrom-checker-menu vga=788 initrd=/install/initrd.gz quiet --
        label memtest
          menu label Test ^memory
          kernel /install/mt86plus
        label hd
          menu label ^Boot from first hard disk
          localboot 0x80

    Read the article

  • mount.nfs: access denied by server while mounting (null), can't find any log information

    - by Mark0978
    Two ubuntu servers: 10.0.8.2 is the client, 192.168.20.58 is the server. Between the 2 machines, ping works, ssh works (in both directions). From 10.0.8.2:

        showmount -e 192.168.20.58
        Export list for 192.168.20.58:
        /imr/nfsshares/foobar 10.0.8.2

        mount.nfs 192.168.20.58:/imr/nfsshares/foobar /var/data/foobar -v
        mount.nfs: access denied by server while mounting (null)

    Found several things online, tried them all and still can't find any log information anywhere. On the server:

        [email protected]:/var/log# cat /etc/hosts.allow
        sendmail: all
        ALL: 10.0.8.2

    /etc/hosts.deny is all comments. How can I get a trail of log statements to figure this out? What does it take to get some logging so I have some idea of WHY it won't mount? On the server:

        [email protected]# nmap -sR RPC 192.168.20.58

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 21:16 CDT
        Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges
        Nmap scan report for 192.168.20.58
        Host is up (0.0000060s latency).
        Not shown: 988 closed ports
        PORT     STATE SERVICE VERSION
        22/tcp   open  unknown
        80/tcp   open  unknown
        111/tcp  open  unknown
        139/tcp  open  unknown
        445/tcp  open  unknown
        902/tcp  open  unknown
        2049/tcp open  unknown
        3000/tcp open  unknown
        5666/tcp open  unknown
        8009/tcp open  unknown
        8222/tcp open  unknown
        8333/tcp open  unknown

        Nmap done: 1 IP address (1 host up) scanned in 3.81 seconds

    From the client:

        [email protected]:~$ nmap -sR RPC 192.168.20.58

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 22:14 EDT
        Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges
        Nmap scan report for 192.168.20.58
        Host is up (0.73s latency).
        Not shown: 988 closed ports
        PORT     STATE SERVICE  VERSION
        22/tcp   open  unknown
        80/tcp   open  unknown
        111/tcp  open  rpcbind (rpcbind V2)  2 (rpc #100000)
        139/tcp  open  unknown
        445/tcp  open  unknown
        902/tcp  open  unknown
        2049/tcp open  nfs (nfs V2-4)  2-4 (rpc #100003)
        3000/tcp open  unknown
        5666/tcp open  unknown
        8009/tcp open  unknown
        8222/tcp open  unknown
        8333/tcp open  unknown

        Nmap done: 1 IP address (1 host up) scanned in 191.56 seconds

    Read the article

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    I am running a small community (6000+ members) on a non-virtual 64-bit Ubuntu 11.04 system. I am not a Linux pro, not even advanced; I just tried to set up a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files look horribly bad. Also, I might use the wrong vocabulary, so in doubt, please ask.

    At the current all-time record of 520 registered users (board accounts, not system users) online at the same time, the average server load is about 2.0 - 5.0. At normal times (~250 users) the average load is about 0.4 - 0.8, sometimes a bit higher on some expensive searches. Everything fine.

    From time to time, however, the load increases up to 120 (120.0, not 12.0 ;) ). During this time it's hard to even connect via SSH, but when I reach the server and use top/htop/iotop to see what's happening, I cannot identify any process causing high CPU load. iotop reports a current reading/writing speed of approx. 70 kB/s, which is practically idle I think. Memory usage is at most ~12 GB of 16 GB, so swap remains empty.

    Now the odd part (at least for me): after waiting some minutes (since I always get a bit into a panic when this happens, it feels like 5 minutes, but I suppose it's more like 20-30 minutes) the server is back to normal, and everything continues as normal.

    Another odd fact: when I run hdparm -tT /dev/sda, I get an answer like:

        /dev/sda:
         Timing cached reads:   7180 MB in  2.00 seconds = 3591.13 MB/sec
         Timing buffered disk reads:  348 MB in  3.02 seconds = 115.41 MB/sec

    When I run the same command while the server is "frozen", the answer looks like:

        /dev/sda:                                                             <- takes about 5 minutes until this line appears
         Timing cached reads:   7180 MB in  2.00 seconds = 3591.13 MB/sec     <- 5 more minutes
         Timing buffered disk reads:  348 MB in  3.02 seconds = 115.41 MB/sec <- another 5 minutes

    So the values are the same, but the quoted time is completely wrong. Using the time command as a prefix also tells me that ~15 minutes were used. I searched in dmesg and /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me, multiple times:

        Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds.
        Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    Now that message does tell me that the php5-fpm task was blocked or did block - but not whether that is the cause or just one of the results of that "freeze". Anyone?

    To cut the long story short, I don't know where to even start analyzing. So if you can give me any advice by looking at the following specs and configs, or ask me to provide more information, I'd be glad.

    Specs:

        6 Core AMD Phenom(tm) II X6 1055T Processor
        16 Gigabyte RAM
        2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3, via software RAID (I suppose)

    Services (per service --status-all, those with [ + ]):

        nginx Webserver 1.0.14
        mySQL 5.1.63 Server
        Ruby on Rails 2.3.11 (passenger-nginx-module)
        php5-fpm 5.3.6-13ubuntu3.7
        SSH
        ido2db

    Further services: default crontab + nightly backup, syslog-ng.

    The website consists of 2 subdomains, forum. and www., where forum is a phpBB 3.x PHP board and www a Ruby on Rails 2.3.11 application (portal). Mini-note: sometimes I notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same database, but the portal uses it read-only.

    The webserver is nginx, using the Phusion Passenger module to communicate with the Ruby application. For the forum it communicates with php5-fpm via socket. Relevant nginx configuration parts (with my comments/questions starting with ;):

        ; in case of freeze due to too high filesystem activity, maybe adding a limit?
        #worker_rlimit_nofile 50000;
        user www-data;
        ; 6 cores, so i read 6 fits. maybe already wrong?
        worker_processes 6;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11;
            passenger_ruby /usr/bin/ruby1.8;

            ; the forum once featured a chat, which was working w/o websockets,
            ; so it was a hell of pull requests (deactivated now, freeze still happening)
            keepalive_timeout 65;
            keepalive_requests 50;

            gzip on;

            server {
                listen 80;
                server_name www.domain.tld;
                root /var/www/domain/rails/public;
                passenger_enabled on;
            }

            server {
                listen 80;
                server_name forum.domain.tld;

                location / {
                    root /var/www/domain/forum;
                    index index.php;
                }

                ; static stuff to be handled by nginx
                location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                    access_log off;
                    expires 30d;
                    root /var/www/domain/forum/;
                }

                ; now the php magic, note the "backend" fastcgi_pass
                location ~ .php$ {
                    fastcgi_split_path_info ^(.+\.php)(.*)$;
                    fastcgi_pass backend;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name;
                    include fastcgi_params;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_intercept_errors on;
                    fastcgi_ignore_client_abort off;
                    fastcgi_connect_timeout 60;
                    fastcgi_send_timeout 180;
                    fastcgi_read_timeout 180;
                    fastcgi_buffer_size 128k;
                    fastcgi_buffers 256 16k;
                    fastcgi_busy_buffers_size 256k;
                    fastcgi_temp_file_write_size 256k;
                    fastcgi_max_temp_file_size 0;
                }

                location ~ /\.ht {
                    deny all;
                }
            }

            ; the php5-fpm socket. i read that /dev/shm/ would be the fastest place for this. bad idea in general?
            upstream backend {
                server unix:/dev/shm/phpfpm;
            }

            ...
        }

    php5-fpm settings (I changed these values because the php5-fpm error log messages kept growing; the freeze problem was there before as well):

        listen = /dev/shm/phpfpm
        user = www-data
        group = www-data
        pm = dynamic
        ; holy, 4000! well, shrinking this value to earth-level gave me
        ; 100s of 502 bad gateway responses. these values were quite stable.
        ; since there are only max 520 users online i dont get it, why i would need
        ; as many children as configured here. due to keep-alive maybe?
        ; asking questions is easier for me since restarting the server will make
        ; my community members angry ;)
        pm.max_children = 4000
        pm.start_servers = 100
        pm.min_spare_servers = 50
        pm.max_spare_servers = 150
        pm.max_requests = 10
        pm.status_path = /status
        ping.path = /ping
        ping.response = pong
        slowlog = log/$pool.log.slow
        ; should i use rlimit?
        ;rlimit_files = 1024
        chdir = /

    mysql/my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        ; high number, but less gives some phpBB errors.
        max_connections = 450
        table_cache = 512
        ; i read twice the cpu cores, bad?
        thread_concurrency = 12
        join_buffer_size = 2084K
        concurrent_insert = 3
        query_cache_limit = 64M
        query_cache_size = 512M
        query_cache_type = 1
        log_error = /var/log/mysql/error.log
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 2
        expire_logs_days = 10
        max_binlog_size = 100M
        low_priority_updates=1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/

    I used smartctl already; the HDDs seem to be fine. /proc/mdstat shows:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sda3[1]
              1459264192 blocks [2/1] [_U]

        md1 : active raid1 sda1[0]
              3911680 blocks [2/1] [U_]

        unused devices:

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 127727
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 127727
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    I quote some questions in my configuration files; these are not (intentionally) directly problem-related, but it would be nice for me to know whether they are indeed questionable or done right.

    One additional fact: my MySQL database is 12 GB in size. I don't know if that matters, but mytop sometimes shows me insert queries that take 4-5 seconds, some 20-30 seconds. It's just a feeling that I am unable to prove (because I don't know how), but when I disable the database, the freeze seems not to happen. Example: I created a dummy Rails application to watch the development log. The app made some SQL queries, reads and inserts. The log quite often looked like this:

        DbTest Load (0.3ms)   SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1
        SQL (0.1ms)   BEGIN
        DbTest Update (0.3ms)   UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722
        - now the log stands still for 5-60 seconds
        SQL (49.1ms)   COMMIT
        - the SQL update time in the log does not include the freeze time
        Rendering test/index
        Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test]

    Bad part is: this mini-freeze here also only happens from time to time. Note: meanwhile I cannot even upload files via scp.

    I currently feel like I'm running from bad to worse and back while googling for my server problem, due to an immense lack of knowledge regarding server configurations. It still makes me wonder why these problems even appear, since 250 users at a time is not such a high amount, right?

    So my questions: What's wrong and how do I fix it? ;) Or: what information can I provide to make the situation clearer? Can you point at some critically bad configuration line which I should look up in the documentation? Are there any tools I can run to see possible bottlenecks? Any further advice? (Next to: "pay someone who knows what he does" - it's a private project, and the server costs enough already. :))

    Thanks for your time and help. Best Regards, Daniel

    P.S.: I renamed the config files to domain.tld since I don't want to add any more load to the server until it's fixed. Might be an exaggerated thought.
    P.P.S.: If I asked a complete duplicate question, sorry. My search results seemed to be quite specific in their own way.

    Read the article

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:

      - First, usage of one CPU jumps to 100% for a few minutes
      - Disk operations drop to ~0 during this time
      - Database operations drop to 0 (blocks and tuples per sec)
      - Logs show during this time:
            WARNING: worker took too long to start; cancelled
            WARNING: worker took too long to start; cancelled
      - No Queries in logs (only those over 200ms are logged)
      - No unusually long-running queries logged before or during
      - Then the second CPU jumps to 100%
      - The number of postgres processes jumps from the usual 8-10 to ~20
      - Matched by a spike in Postgres Blocks per second (about twice normal)
      - Logs show: LOG: could not accept SSL connection: EOF detected
      - Queries are running but slow
      - Restarting postgres returns everything to normal

    Setup:

      - Server: Amazon EC2 Large
      - Ubuntu 10.04.2 LTS
      - Postgres 9.0.3
      - Dedicated DB server

    Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking out?

    Read the article

  • How to point a Wordpress registered domain to github pages?

    - by Dave Castillo
    I'm trying to use a CNAME file in my GitHub project to point my WordPress-registered domain name at it, e.g. to have example.com point to myusername.github.com. I read "GitHub - Setting up a custom domain" and it tells me to point the domain to their servers:

        For a TLD like example.com, you should use an A record pointing to 204.232.175.78.

    But in the WordPress domain manager I only get the option to specify nameservers, like this:

        NS1.WORDPRESS.COM
        NS2.WORDPRESS.COM

    Inputting 204.232.175.78 in the WordPress manager gives me an "invalid TLD" error. What am I doing wrong, and how can I troubleshoot this issue?
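
    One low-tech way to watch progress while troubleshooting (this is not from the original question) is to check what the domain actually resolves to at any given moment; once the A record is in place it should come back as GitHub's IP. A minimal Python sketch using only the standard library, with placeholder hostnames:

        import socket

        def resolve(hostname):
            """Print every IPv4 address the local resolver currently returns."""
            infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
            for *_, sockaddr in infos:
                print(hostname, "->", sockaddr[0])

        # Placeholder names -- substitute the real apex domain and GitHub Pages host.
        resolve("example.com")             # should eventually return 204.232.175.78
        resolve("myusername.github.com")   # the GitHub Pages host it should map to

    Until the DNS change propagates, the first call will keep returning whatever the current records still point at, which is a quick way to tell a propagation delay from a misconfigured record.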

    Read the article

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up at a new webhost. I have yet to move my sites over. Upon the initial SSH connection, I checked the load and memory usage; they do seem rather higher than I would like:

        # uptime
         12:06:51 up 71 days, 23:23,  1 user,  load average: 9.02, 9.49, 9.45

        # free
                     total       used       free     shared    buffers     cached
        Mem:      33014800   31927192    1087608          0    2384812   17729816
        -/+ buffers/cache:   11812564   21202236
        Swap:     16787916       8584   16779332

    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not?

    Also, there is no /etc/issue file, so I tried other methods to guess the OS:

        # uname -a
        Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

        # which yum
        /usr/bin/yum

        # which apt-get
        #

    That looks like CentOS / RHEL 6.2, possibly?

    Read the article

  • Ideas for SVN/SQL/PHP/Linux Dev Environment Supporting Multiple Isolated Environments?

    - by jpganz18
    I am trying to create a "dev" environment for my users. In that environment each of them would have access to their own account for phpMyAdmin, SQL, Subversion and FTP, which is not a big problem, but I would also like to emulate each one being on their own server. I mean that they could change the PHP configuration (for example) and the change would apply only to their own environment. Any idea how to do this? Do I have to do something "special" when installing my server, or something like that?

    Read the article

  • Ubuntu server has slow performance

    - by Rich
    I have a custom built Ubuntu 11.04 server with a 6 disk software RAID 10 primary drive. On it I'm primarily running a PostgreSQL and a few other utilities that stream data from the web. I often find after a few hours of uptime the server starts to lag with all kinds of processes. For example, it may take 10-15 seconds after log-in to get a shell prompt. It might take 5-10 seconds for top to come up. An ls might take a second or two. When I look at top there is almost no CPU usage. There's a fair amount of memory used by the PostgreSQL server but not enough to bleed into swap. I have no idea where to go from here, other than to suspect the RAID10 (I've only ever had software RAID 1's before).

    Edit: Output from top:

        top - 11:56:03 up 1:46, 3 users, load average: 0.89, 0.73, 0.72
        Tasks: 119 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 93.5%id, 6.2%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 16325596k total, 3478248k used, 12847348k free, 20880k buffers
        Swap: 19534176k total, 0k used, 19534176k free, 3041992k cached

          PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         1747 woodsp  20   0  109m  10m 4888 S    1  0.1  0:42.70 python
          357 root    20   0     0    0    0 S    0  0.0  0:00.40 jbd2/sda3-8
            1 root    20   0 24324 2284 1344 S    0  0.0  0:00.84 init
            2 root    20   0     0    0    0 S    0  0.0  0:00.00 kthreadd
            3 root    20   0     0    0    0 S    0  0.0  0:00.24 ksoftirqd/0
            6 root    RT   0     0    0    0 S    0  0.0  0:00.00 migration/0
            7 root    RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/0
            8 root    RT   0     0    0    0 S    0  0.0  0:00.00 migration/1
           10 root    20   0     0    0    0 S    0  0.0  0:00.02 ksoftirqd/1
           12 root    RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/1
           13 root    RT   0     0    0    0 S    0  0.0  0:00.00 migration/2
           14 root    20   0     0    0    0 S    0  0.0  0:00.00 kworker/2:0
           15 root    20   0     0    0    0 S    0  0.0  0:00.00 ksoftirqd/2
           16 root    RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/2
           17 root    RT   0     0    0    0 S    0  0.0  0:00.00 migration/3
           18 root    20   0     0    0    0 S    0  0.0  0:00.00 kworker/3:0
           19 root    20   0     0    0    0 S    0  0.0  0:00.02 ksoftirqd/3
           20 root    RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/3
           21 root     0 -20     0    0    0 S    0  0.0  0:00.00 cpuset
           22 root     0 -20     0    0    0 S    0  0.0  0:00.00 khelper
           23 root    20   0     0    0    0 S    0  0.0  0:00.00 kdevtmpfs
           24 root     0 -20     0    0    0 S    0  0.0  0:00.00 netns
           26 root    20   0     0    0    0 S    0  0.0  0:00.00 sync_supers

    df -h:

        rpsharp@ncp-skookum:~$ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda3       1.8T  549G  1.2T  32% /
        udev            7.8G  4.0K  7.8G   1% /dev
        tmpfs           3.2G  492K  3.2G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            7.8G     0  7.8G   0% /run/shm
        /dev/sda2       952M  128K  952M   1% /boot/efi
        /dev/md0        5.5T  562G  4.7T  11% /usr/local

    free -m:

        psharp@ncp-skookum:~$ free -m
                     total       used       free     shared    buffers     cached
        Mem:         15942       3409      12533          0         20       2983
        -/+ buffers/cache:        405      15537
        Swap:        19076          0      19076

    tail -50 /var/log/syslog:

        Jul 3 06:31:32 ncp-skookum rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1070" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
        Jul 3 06:39:01 ncp-skookum CRON[14211]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 06:40:01 ncp-skookum CRON[14223]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 07:00:01 ncp-skookum CRON[14328]: (woodsp) CMD (/home/woodsp/bin/mail_tweetupdate # email an update)
        Jul 3 07:00:01 ncp-skookum CRON[14327]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 07:00:28 ncp-skookum sendmail[14356]: q63E0SoZ014356: from=woodsp, size=2328, class=0, nrcpts=2, msgid=<[email protected]>, relay=woodsp@localhost
        Jul 3 07:00:29 ncp-skookum sm-mta[14357]: q63E0Si6014357: from=<[email protected]>, size=2569, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Jul 3 07:00:29 ncp-skookum sendmail[14356]: q63E0SoZ014356: to=Spencer Wood <[email protected]>,Martin Lacayo <[email protected]>, ctladdr=woodsp (1004/1005), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=62328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (q63E0Si6014357 Message accepted for delivery)
        Jul 3 07:00:29 ncp-skookum sm-mta[14359]: STARTTLS=client, relay=mx3.stanford.edu., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
        Jul 3 07:00:29 ncp-skookum sm-mta[14359]: q63E0Si6014357: to=<[email protected]>,<[email protected]>, ctladdr=<[email protected]> (1004/1005), delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=152569, relay=mx3.stanford.edu. [171.67.219.73], dsn=2.0.0, stat=Sent (Ok: queued as 8F3505802AC)
        Jul 3 07:09:08 ncp-skookum CRON[14396]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 07:17:01 ncp-skookum CRON[14438]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Jul 3 07:20:01 ncp-skookum CRON[14453]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 07:39:01 ncp-skookum CRON[14551]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 07:40:01 ncp-skookum CRON[14562]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 08:00:01 ncp-skookum CRON[14668]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 08:09:01 ncp-skookum CRON[14724]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 08:17:01 ncp-skookum CRON[14766]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Jul 3 08:20:01 ncp-skookum CRON[14781]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 08:39:01 ncp-skookum CRON[14881]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 08:40:01 ncp-skookum CRON[14892]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)

    Output of hdparm -t /dev/sd{a,b,c,d,e,f}. This looks suspicious?

        /dev/sda:
         Timing buffered disk reads:   2 MB in  4.84 seconds = 423.39 kB/sec

        /dev/sdb:
         Timing buffered disk reads: 420 MB in  3.01 seconds = 139.74 MB/sec

        /dev/sdc:
         Timing buffered disk reads: 390 MB in  3.00 seconds = 129.87 MB/sec

        /dev/sdd:
         Timing buffered disk reads: 416 MB in  3.00 seconds = 138.51 MB/sec

        /dev/sde:
         Timing buffered disk reads: 422 MB in  3.00 seconds = 140.50 MB/sec

        /dev/sdf:
         Timing buffered disk reads: 416 MB in  3.01 seconds = 138.26 MB/sec

    Read the article

  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome & other browsers, but I get some errors with PHP in CLI mode, so I'm investigating it, running this:

        openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443

    Quite surprisingly, my HTTPS appears untrusted, with a Verify return code: 27 (certificate not trusted). Here is the raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=27:certificate not trusted
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    So GeoTrust Global CA appears to be not trusted on the system (Ubuntu 11.10). I added Equifax_Secure_CA to try to solve this... but in that case I get Verify return code: 19 (self signed certificate in certificate chain)! Raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify error:num=19:self signed certificate in certificate chain
        verify return:1
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    Edit: It looks like my server does not trust/provide the Equifax Root CA, even though I do correctly have the file in /usr/share/ca-certificates/mozilla/Equifax...

    Read the article

  • What benchmark tool to use to benchmark hardware for VM server?

    - by Mark0978
    We are setting up a new piece of hardware to virtualize several of our servers on. The choices are RAID 5, RAID 6, and RAID 0+1. We want to benchmark all three before we go live with the machine, but I'm not sure how to test the speed. Since we will be using it to host VMs, what will the actual disk traffic look like? What can I use to see if RAID 6 is too slow? Short of setting up the system with all the VMs on it and running it that way, then redoing all the work, I'm not sure how to test it. It then becomes more of a subjective test than an objective one. I'm worried that RAID 6 will have too much overhead, that RAID 5 will be too fragile with 3 TB drives, and I've never worked with 0+1 at all. So in short, I'd like to set up the base machine (which will be running Linux) and then test the underlying software RAID for speed. What kind of tool exists to simulate this kind of load? Barring a specific tool, how about a generic FS testing tool that will simulate different loads?
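
    The key point in the question is that VM hosting produces many small, mostly random I/Os from several guests at once, which behaves very differently from a single sequential copy. Purely to illustrate that difference (this is not from the original post and is no substitute for a real benchmark, since it goes through the page cache, uses a single thread and a queue depth of one), a rough Python sketch that times the two access patterns against an existing large file (the path is a placeholder) could look like this:

        import os
        import random
        import time

        def crude_read_benchmark(path, block_size=4096, reads=2000):
            """Time sequential vs. random reads of an existing large file.

            Illustration only: absolute numbers mean little because reads
            are not direct I/O and may be served from the page cache.
            """
            size = os.path.getsize(path)
            max_offset = max(size - block_size, 0)

            start = time.time()
            with open(path, "rb") as f:          # sequential pass
                for _ in range(reads):
                    if len(f.read(block_size)) < block_size:
                        f.seek(0)                # wrap around near EOF
            sequential = time.time() - start

            start = time.time()
            with open(path, "rb") as f:          # random pass
                for _ in range(reads):
                    f.seek(random.randint(0, max_offset))
                    f.read(block_size)
            randomized = time.time() - start

            mb = reads * block_size / 1e6
            print(f"sequential: {mb / sequential:.1f} MB/s, random: {mb / randomized:.1f} MB/s")

        # Hypothetical test file -- create a large file on the array first.
        # crude_read_benchmark("/mnt/raid-test/testfile")

    If the random figure for one RAID layout collapses relative to another while the sequential figure stays similar, that is the kind of difference VM hosting tends to expose.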

    Read the article

  • Are spurious TCP connections on port 53 a problem?

    - by Darren Greaves
    I run a server which, amongst other things, uses tinydns for DNS and axfrdns for handling transfer requests from our secondary DNS (another system). I understand that tinydns uses port 53 on UDP and axfrdns uses port 53 on TCP. I've configured axfrdns to only allow connections from my agreed secondary host. I run logcheck to monitor my logs, and every day I see spurious connections on port 53 (TCP) from seemingly random hosts. They usually turn out to be from ADSL connections. My question is: are these innocent requests or a security risk? I am happy to block repeat offenders using iptables, but I don't want to block innocent users of one of the websites I host. Thanks, Darren.
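
    Since the post mentions blocking repeat offenders, one quick way to see which source addresses keep coming back is to tally them from whatever lines logcheck flags. This generic Python sketch (not from the original post; it simply pulls anything that looks like an IPv4 address out of stdin) is one way to do that:

        import re
        import sys
        from collections import Counter

        IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

        def top_sources(lines, limit=20):
            """Count IPv4 addresses appearing in the given log lines."""
            counts = Counter(ip for line in lines for ip in IPV4.findall(line))
            return counts.most_common(limit)

        if __name__ == "__main__":
            # Hypothetical usage: grep ':53' /var/log/syslog | python3 top_sources.py
            for ip, hits in top_sources(sys.stdin):
                print(f"{hits:6d}  {ip}")

    Anything that shows up repeatedly from the same address is then easy to spot before deciding whether it deserves an iptables rule.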

    Read the article

  • Chrome Saved Passwords security issue

    - by Nalaka526
    If I save a password in Google Chrome, someone else can easily see it via Settings -> Advanced Settings -> Manage saved passwords -> Show (on the relevant password field) and then use it from another computer. Isn't that a security issue? I think it would be much more secure if it displayed only the entry, with dots or something in place of the password rather than the actual password. Is there any acceptable reason for showing the password to anyone this easily? Note: Firefox also shows the actual passwords; I didn't check IE.

    Read the article

  • What is the best memory supported by Asus Rampage Formula (1) motherboard

    - by James
    I'm rebuilding a PC from old components I have, and I want to know what the best RAM sticks are for the Asus Rampage Formula (the first motherboard in the Rampage Formula range). According to the Asus website (HERE) the maximum supported memory is 8GB, but when I search through the memory compatibility list there's nothing above 2046MB, and the motherboard only supports single- or dual-channel memory. Can anyone point me in the direction of a more comprehensive list of compatible memory? Or does anyone know where I can ask about memory compatibility? Note: As far as I know, this model of motherboard is no longer in production.

    Read the article

  • Windows and Linux applications to show cumulative uploads/downloads for each app, at a glance

    - by jontyc
    I've read quite a few other threads on SU, but they have been focused on instantaneous/average bandwidth (B/sec) rather than cumulative download/upload totals for a period, or else they don't drill down to the application level. Resource Monitor in Windows 7 only shows bandwidth. I've just been trying NetLimiter, and whereas it can show total uploaded/downloaded, it's a case of having one stats window open per application, as opposed to a table showing all applications at once. I'm looking for applications for both Windows and Linux (Ubuntu), but they don't need to be the same.

    Read the article

  • Can a generic drum unit cause a Brother HL-2170w laser B&W printer to print dirty?

    - by marcamillion
    I tried a generic drum & generic toner - and it prints dirty. I also tried cleaning everything per the instructions in the documentation. Could it be a faulty drum unit? Or is it just a generic drum unit issue? I am about to buy a new drum unit, and don't want to buy generic if it will always cause this issue. Thoughts? P.S. I tried new toner cartridges (generic, but I know these work with a 'genuine drum' properly).

    Read the article

  • Restoring a Ubuntu Linux LVM drive

    - by user848106
    Scenario: I have a hard disk with a bootable sda1 partition and a Linux LVM. I seem to have corrupted the MBR, or GRUB - not sure which. I know it no longer boots, and even with boot-repair and an Ubuntu live USB I wasn't able to restore it: following a "fix" with boot-repair I get "missing operating system". Can't I just reinstall GRUB or a new menu.lst?

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0007b68f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      499711      248832   83  Linux
        /dev/sda2          501758   625141759   312320001    5  Extended
        /dev/sda5          501760   625141759   312320000   8e  Linux LVM

    Read the article

  • Asus n61Ja notebook bios update

    - by zKs
    I wanted to update the BIOS, with an official BIOS update, from version 207 to 211. I didn't use winflash; I used easyflash in the BIOS. Everything seemed to be going okay: it deleted the old files, wrote the new ones, and verified the new ones. Then it said "shutdown in/after 2 seconds" and it shut down. After that, nothing happened anymore. The power button is completely unresponsive. The battery light was still on, and I'm not sure if I should've just waited... I didn't though; I thought I had to remove the battery and take the power off completely to be able to start it up again. So I'm wondering: what are my options here? My warranty expired and I don't really have the money to send it in and pay hundreds of bucks on repairs. Is there anything I can try? A CMOS battery reset? Anything??? Please help me out! I would be very grateful :)

    P.S. What was sort of odd, by the way, was that easyflash said something like it was an unsigned BIOS and asked if I wanted to flash it anyway, if I wanted to 'force' it or something. It was with 100% certainty the correct update from the Asus.com support site, so I didn't take that 'warning' seriously.

    Read the article

  • "The directory name is invalid" when trying to install drivers in Windows 7 via Device Manager

    - by Luke
    First off, this computer is not mine, it's a customer's system. Having said that...

    The hard drive was moved to a new motherboard, CPU and RAM combo, and booted up fine. The customer put in the driver CD, the drivers wouldn't load, and he brought it in to me. Under Device Manager in Windows 7 x64, I see lots of "PCI to PCI bridge" entries, one "SMBus Controller", and about 20 "Unknown Device" entries. Greeeeeat...

    So I start with the SMBus driver, taken directly from the Asus website for the motherboard (P8H77-M Pro). If I install from the setup program, it tells me to reboot, then it starts the install. It gets halfway through the setup, then fails ("An unknown error occurred. Setup will exit"). When I try to point to the folder from Device Manager, it starts copying files for the driver, even presents me with the proper name of the device, but then says that an error has occurred there as well: "The directory name is invalid."

    Doing some Googling, I saw that many people had this issue with Vista. OK, Vista and 7 are similar, maybe the solutions are the same... but they aren't. I tried:

      - Copying the entire driver folder and setup utility to the Program Files folder and running it / selecting it in Device Manager
      - Downloading another set of drivers in case this one is corrupt
      - Disabling UAC
      - Deleting and recreating the %WINDIR%\TEMP folder
      - Removing all references to previous hardware that I could find, even in Device Manager's hidden mode

    So far, nothing has worked. A wipe and reload is out of the question.

    Read the article

  • Old HP computer powers off after seconds, hard drive still spins. Power supply? Motherboard?

    - by Chase
    Someone called me saying their computer keeps shutting off. I brought it home to troubleshoot. It's an HP Pavilion 513c. The computer powers on normally but almost immediately powers off. Sometimes it manages to make it to a fully loaded desktop and other times it powers off during the boot process. The hard drive still spins but there is no video and the CPU fans quit spinning. The power has to be unplugged before you can turn it back on again. I'm thinking it's either a motherboard or power supply issue. I don't have a spare of either component laying around to test with. Any guess on which part is more likely to be the culprit? Thanks.

    Read the article

  • The fastest way to encode image+audio for Youtube from command line?

    - by Pavel Vlasov
    I have an MP3 and an image, and I want to make a simple clip from them to upload to YouTube. Is there a fast solution? If video formats are so badly designed, then maybe it is possible to use a pre-rendered video-only clip? This works well, except it takes as much time as the audio lasts:

        ffmpeg -loop_input -r ntsc -i "%IMAGE%" -i "%AUDIO%" -r 1 -acodec copy -shortest -re -force_fps "%VIDEO%"

    This takes a second, but results in a black-screen video that a desktop video player plays successfully, yet YouTube does not accept:

        ffmpeg -i "%IMAGE%" -i "%AUDIO%" -acodec copy "%VIDEO%"

    Windows 7. Preserving audio quality is preferred over video quality.

    Read the article

  • Removable vs fixed mount points in Linux

    - by Dave
    What makes a mount point removable in Linux? I am using Gentoo Linux with Gnome 3.2, and I find it annoying that some of my drives (ex: /dev/sdb) appear as removable but not the others (ex: /dev/sdc, /dev/sdd). They are all in /etc/fstab, with the same options. They are all mounted properly at startup, they all work fine under my own folders /mnt/drive2 /mnt/drive3 /mnt/drive4. But only one of them (the first) appears in Nautilus (and in the Gnome 3 notification tray) as mountable/removable, not the others. Can I add options to my fstab to hide it? Or can I probe using udevadm or whatever? It looks strange to be able to remove/unmount fixed drives that I never need to unmount nor remove. Any pointer would be good, thanks.

    Read the article

  • Windows 7 won't read from NAS on LAN

    - by Alfy
    I've got a Linkstation NAS drive on a local network. Having just got a new laptop with Windows 7 Home Professional, I can no longer read anything off the drive. I've tried accessing it using \\192.168.1.55\share, using FTP programs such as WinSCP and FileZilla, and even using Firefox to hit ftp://192.168.1.55. The really annoying thing is that through all these methods I can see the files on the drive, which rules out any kind of connection issue. I can navigate through the NAS file system, but as soon as I try to copy a file off the NAS, things just stop working. Accessing the drive through a Windows XP machine works fine. So far I've tried:

      - Disabling firewalls
      - Adding the LmCompatibilityLevel key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
      - Using the 40-56 bit encryption instead of the 128 bit

    Has anyone got any suggestions of what I can check or try? This is driving me crazy and I'm totally out of ideas.

    Read the article
