Daily Archives

Articles indexed Tuesday, July 10, 2012


  • multiple-inheritance substitution

    - by Luigi
    I want to write a framework-specific module that wraps and extends the Facebook PHP SDK (https://github.com/facebook/php-sdk/). My problem is how to organize the classes in a nice way. Getting into details: the Facebook PHP SDK consists of two classes: BaseFacebook, an abstract class with all the work the SDK does, and Facebook, which extends BaseFacebook and implements the parent's abstract persistence-related methods with default session usage. Now I have some functionality to add:
    - Facebook class substitution, integrated with the framework's session class
    - shorthand methods that run the API calls I use most often (through BaseFacebook::api())
    - authorization methods, so I don't have to rewrite this logic every time
    - configuration pulled from framework classes instead of passed as parameters
    - caching, integrated with the framework's cache module
    I know something has gone very wrong, because I have too much inheritance and it doesn't look healthy, yet wrapping everything in one "complex extension" class also seems too much. I think I should have a few classes working together, but then I get into problems like: if the cache class doesn't actually extend and override the BaseFacebook::api() method, the shorthand and authentication classes won't be able to use the caching. Maybe some kind of pattern would be right here? How would you organize these classes and their dependencies?
    EDIT 04.07.2012 - bits of code related to the topic. This is the base class of the Facebook PHP SDK:
        abstract class BaseFacebook
        {
            // ... some methods
            public function api(/* polymorphic */) {
                // ... method that makes API calls
            }
            public function getUser() {
                // ... tries to get the user id from the session
            }
            // ... other methods
            abstract protected function setPersistentData($key, $value);
            abstract protected function getPersistentData($key, $default = false);
            // ... a few more abstract methods
        }
    Normally the Facebook class extends it and implements those abstract methods. I replaced it with my substitute, a Facebook_Session class:
        class Facebook_Session extends BaseFacebook
        {
            protected function setPersistentData($key, $value) {
                // ... method body
            }
            protected function getPersistentData($key, $default = false) {
                // ... method body
            }
            // ... implementation of the other abstract functions from BaseFacebook
        }
    Then I extend this further with shorthand methods and configuration variables:
        class Facebook_Custom extends Facebook_Session
        {
            public function __construct() {
                // ... call the parent's constructor with parameters from the framework config
            }
            public function api_batch() {
                // ... a wrapper for the parent's api() method
                return $this->api('/?batch=' . json_encode($calls), 'POST');
            }
            public function redirect_to_auth_dialog() {
                // method body
            }
            // ... more methods like this, for common queries / authorization
        }
    I'm not sure whether this isn't already too much for a single class (authorization / shorthand methods / configuration). Then comes another extending layer - the cache:
        class Facebook_Cache extends Facebook_Custom
        {
            public function api() {
                $cache_file_identifier = $this->getUser();
                if (/* cache_file_identifier is not null and a valid file with a cached query result was found */) {
                    // return the result
                } else {
                    try {
                        // call Facebook_Custom::api(), cache and return the result
                    } catch (FacebookApiException $e) {
                        // if the access token is expired, force refreshing it
                        parent::redirect_to_auth_dialog();
                    }
                }
            }
            // ... some other stuff related to caching
        }
    Now this pretty much works. A new instance of Facebook_Cache gives me all the functionality, and the shorthand methods from Facebook_Custom use caching, because Facebook_Cache overrides the api() method. But here is what is bothering me: I think it's too much inheritance, and it's all very tightly coupled - look how I had to specify Facebook_Custom::api instead of parent::api to avoid an api() method loop when Facebook_Cache gets extended. Overall it feels messy and ugly. So again, this works, but I'm asking about patterns / ways of doing this in a cleaner and smarter way.
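    One way to cut the inheritance chain down is composition: a rough sketch follows, using hypothetical class names (ApiCaller, SdkApiCaller, CachingApiCaller, FacebookShorthand) and assuming a framework cache object with get()/set() methods; it illustrates the decorator idea rather than prescribing an API.
        <?php
        // Hypothetical sketch (PHP 5 style). Only BaseFacebook comes from the SDK;
        // every other name here is illustrative.

        interface ApiCaller {
            // Accepts the same polymorphic arguments as BaseFacebook::api().
            public function api();
        }

        // Thin adapter around the SDK object, so everything else depends on the interface.
        class SdkApiCaller implements ApiCaller {
            private $facebook;
            public function __construct(BaseFacebook $facebook) {
                $this->facebook = $facebook;
            }
            public function api() {
                return call_user_func_array(array($this->facebook, 'api'), func_get_args());
            }
        }

        // Caching decorator: wraps any ApiCaller, so shorthand helpers get caching for free.
        class CachingApiCaller implements ApiCaller {
            private $inner;
            private $cache; // assumed framework cache object exposing get()/set()
            public function __construct(ApiCaller $inner, $cache) {
                $this->inner = $inner;
                $this->cache = $cache;
            }
            public function api() {
                $args = func_get_args();
                $key  = md5(serialize($args));
                $hit  = $this->cache->get($key);
                if ($hit !== null) {
                    return $hit;
                }
                $result = call_user_func_array(array($this->inner, 'api'), $args);
                $this->cache->set($key, $result);
                return $result;
            }
        }

        // Shorthand/authorization helpers depend on the interface, not on a concrete ancestor.
        class FacebookShorthand {
            private $caller;
            public function __construct(ApiCaller $caller) {
                $this->caller = $caller;
            }
            public function api_batch(array $calls) {
                return $this->caller->api('/?batch=' . json_encode($calls), 'POST');
            }
        }
    Wiring would then look like new FacebookShorthand(new CachingApiCaller(new SdkApiCaller($facebookSession), $frameworkCache)), and a layer can be added or dropped without touching the others.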

    Read the article

  • What does the EC2 command line say when a machine won't start?

    - by OneSolitaryNoob
    When starting an instance on Amazon EC2, how would I detect a failure, for instance if there's no machine available to fulfill my request? I'm using one of the less common machine types and am concerned it won't start up, but I'm having trouble finding out what message to look for to detect this. I'm using the EC2 command-line tools to do this. I know I can look for 'running' when I run ec2-describe-instances to see if the machine is up, but I don't know what to look for to see whether the startup failed. Thanks!
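    A rough polling sketch with the classic EC2 API tools (the instance ID and timing are placeholders, and exact output columns vary by tool version, so this just watches for the state leaving 'pending' without ever reaching 'running'):
        #!/bin/sh
        # Hypothetical example: watch the state of a freshly started instance.
        INSTANCE_ID="i-12345678"   # placeholder

        for i in $(seq 1 30); do
            # ec2-describe-instances prints an INSTANCE line containing the state
            # (pending, running, terminated, ...); grep it out rather than relying
            # on a fixed column position.
            STATE=$(ec2-describe-instances "$INSTANCE_ID" \
                    | grep -o -E 'pending|running|shutting-down|terminated|stopping|stopped' \
                    | head -n 1)
            echo "state: ${STATE:-unknown}"
            case "$STATE" in
                running)            echo "instance is up";   exit 0 ;;
                terminated|stopped) echo "startup failed";   exit 1 ;;
            esac
            sleep 10
        done
        echo "timed out waiting for instance"
        exit 1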

    Read the article

  • WPF Application that only has a tray icon

    - by Michael Stum
    I am a total WPF newbie and wonder if anyone could give me some pointers on how to write an application that starts minimized to the tray. The idea is that it periodically fetches an RSS feed and creates a toaster popup when there are new entries. The application should still have a main window (essentially just a list containing all feed entries), but it should be hidden by default. I have started reading about XAML and WPF and I know that the StartupUri in App.xaml has to point to my main window, but I have no idea what the proper way is to do the system-tray icon and hide the main window (this also means that when the user minimizes the window, it should minimize to the tray, not to the taskbar). Any hints?
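    One common approach (a minimal sketch, not the only way) is to borrow System.Windows.Forms.NotifyIcon from WinForms inside the WPF window's code-behind; this needs references to the System.Windows.Forms and System.Drawing assemblies, the class and event names below are real APIs, and the window/namespace layout is assumed:
        // Code-behind for MainWindow.xaml (namespace omitted for brevity).
        using System;
        using System.Windows;
        using Forms = System.Windows.Forms;

        public partial class MainWindow : Window
        {
            private Forms.NotifyIcon _trayIcon;

            public MainWindow()
            {
                InitializeComponent();
                _trayIcon = new Forms.NotifyIcon
                {
                    Icon = System.Drawing.SystemIcons.Application, // placeholder icon
                    Visible = true,
                    Text = "Feed reader"
                };
                // Restore the window when the tray icon is double-clicked.
                _trayIcon.DoubleClick += (s, e) => { Show(); WindowState = WindowState.Normal; };
            }

            protected override void OnStateChanged(EventArgs e)
            {
                if (WindowState == WindowState.Minimized)
                    Hide();                        // minimize to tray instead of the taskbar
                base.OnStateChanged(e);
            }

            protected override void OnClosed(EventArgs e)
            {
                _trayIcon.Dispose();               // remove the icon when the app exits
                base.OnClosed(e);
            }
        }
    To start hidden, the window can call Hide() after construction (or the NotifyIcon can be created in App.xaml.cs and the window shown only on demand).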

    Read the article

  • Root Access: Don Dodge talks to 3 time founder Jennifer Reuting of DocRun

    Three-time startup founder Jennifer Reuting, CEO of DocRun and author of LLCs for Dummies, sits down with Don Dodge to talk startups. Jennifer started her first company at 17 from the ashes of a failed company, and is now revolutionizing the legal-documents business with DocRun. An inspiring interview. From: GoogleDevelopers | Views: 258 | 12 ratings | Time: 44:37

    Read the article

  • Apache not running from rc.d on FreeBSD

    - by Oksana Molotova
    I'm using FreeBSD 8.3 and Apache 2.2. I didn't install Apache from ports; instead I compiled it from source because I wanted to move the binary and configuration to a different path (I'm centralizing all of the major production daemons and their configurations in a single place). In any case, I based the /usr/local/etc/rc.d/apache22 file on one from a different server where Apache was installed from ports, and only modified the binary and config paths within. I can execute it manually with /usr/local/etc/rc.d/apache22 start; however, even with apache22_enable="YES" in /etc/rc.conf it fails to start at boot. All permissions and ownership are identical to the other server where it works. What am I missing, and is there a way to debug this kind of thing?
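    For comparison, a minimal rc.subr-style script looks roughly like this (paths are placeholders); a script that skips load_rc_config, or whose rcvar does not match the rc.conf knob, will run fine by hand but be ignored at boot:
        #!/bin/sh
        # PROVIDE: apache22
        # REQUIRE: LOGIN cleanvar
        # KEYWORD: shutdown

        . /etc/rc.subr

        name="apache22"
        rcvar=apache22_enable        # must match apache22_enable="YES" in /etc/rc.conf

        load_rc_config $name
        : ${apache22_enable:="NO"}

        command="/your/custom/path/bin/httpd"                 # placeholder binary location
        command_args="-f /your/custom/path/conf/httpd.conf"   # placeholder config path
        pidfile="/var/run/httpd.pid"

        run_rc_command "$1"
    Running the script with the rcvar argument (/usr/local/etc/rc.d/apache22 rcvar) shows whether rc.conf is actually enabling it, which is a quick way to debug this kind of thing.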

    Read the article

  • Will restoring a cPanel account delete irrelevant mysql databases?

    - by user54625
    Long story short: I have many accounts in WHM and many domains. However, most of the MySQL databases for all the sites were created under one user, "admin" (let's say it's admin.com). I recently moved the admin.com account to another web server, and now I need to move it back. I would like to restore this account from a backup made by that web server. However, the backup only has 3 databases attached to that domain, while the server I'm moving it to has those 3 databases (old DBs that need to be replaced) PLUS all the other databases used by other accounts on the server. My question: if I restore the admin.com account, will it delete all the other databases on the account, or will it only replace the 3 DBs it has? I can't take any chance of deleting all those other DBs... If it WOULD delete them, how would I go about restoring this account without losing the other data? An easy way to resolve this situation would be to reassign the MySQL database owners to the proper accounts, but alas it doesn't seem that there is a way to do that via WHM.
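    Whatever the restore ends up doing, a precautionary dump of every database on the target server costs little; a rough sketch (the backup path is a placeholder, and it assumes root's MySQL credentials are available, e.g. via /root/.my.cnf on a typical cPanel box):
        #!/bin/sh
        # Hypothetical safety net before running the WHM/cPanel account restore.
        BACKUP_DIR=/root/pre-restore-dumps     # placeholder location
        mkdir -p "$BACKUP_DIR"

        # Dump every database individually so any single DB can be restored later.
        for db in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema)$'); do
            mysqldump --single-transaction "$db" > "$BACKUP_DIR/$db.sql"
        done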

    Read the article

  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem that arises is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM. Let me go into more detail. On OSX, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OSX assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb. a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything will work. If b.rb is loaded first, it will try to access an undefined variable MyObject, and fail. Worse than this is that it doesn't always just fail. Because directory ordering on Linux is not guaranteed, it will be a different order on different machines. This is worse because sometimes the tests pass, and sometimes they fail. This is the worst possible result. So my question is, is there a way to make file system ordering repeatable? Some flag for ext4, perhaps, that says it will always traverse directories in some order? Or maybe a different file system that has this guarantee?
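    Independent of the file system, the usual defence is to sort explicitly wherever directory contents are globbed; a rough Ruby sketch of the loader side (the lib/ layout is assumed):
        # Hypothetical loader: sort the glob so load order is identical on HFS+, ext4, etc.
        Dir.glob(File.join(File.dirname(__FILE__), 'lib', '*.rb')).sort.each do |path|
          require path   # a.rb now always loads before b.rb
        end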

    Read the article

  • Using hdparm for better performance on Web Servers

    - by Rishav
    I just heard about using hdparm to optimize the hard disk performance of a server. Is this common practice? What file systems do you use? I generally deploy on the second-to-last release of Ubuntu for stability reasons; do you use other filesystems, or distributed file systems, from the get-go? Do the hdparm settings change for different file systems? I haven't tried this yet, so how much difference do changes like this make?
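    Before and after any tuning, hdparm's own benchmark flags give a quick baseline (the device name is a placeholder): -T measures cached reads, -t measures buffered disk reads, and -I dumps what the drive reports about itself.
        # Substitute your actual data disk for /dev/sda.
        sudo hdparm -Tt /dev/sda     # cached vs. buffered read throughput
        sudo hdparm -I  /dev/sda     # identify info: DMA/UDMA modes, write cache, NCQ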

    Read the article

  • Best City in Australia for Dedicated Hosting [closed]

    - by Brian Stinson
    We are looking to duplicate our database and application servers to serve our customers in Australia. We are looking for a well-connected datacenter providing dedicated hosting (full machine rental) to take database updates and the like from our main site in Boston, MA. Which general location/city in Australia is best connected? East coast? West coast? If you have individual datacenter recommendations, those are helpful as well.

    Read the article

  • Why does the IIS_IUSRS user need read permission on the ApplicationHost.config file in my Exchange 2010 OWA environment?

    - by CrabbyAdmin
    Previously I was experiencing issues where users in my Exchange 2010 environment would periodically receive an error stating: "The custom error module does not recognize this error." However, following the advice of a thread I read somewhere, I granted the computername\IIS_IUSRS user Read permissions on the ApplicationHost.config file in the OWA environment. Since making this change, the problem has not resurfaced. However, I'm not satisfied to simply settle for the fix; I'd like to understand why it worked. So can somebody please tell me why this user needs read permissions on this file, and why that would have resolved the issue?

    Read the article

  • User cannot book a room resource in Exchange 2010

    - by bakesale
    I'm having a problem with a specific user not being able to book a room resource. No error message is generated at all. The user attempts to add the room as a meeting resource and it simply doesn't register with the resource's calendar. I have tested the resource with other accounts, and other rooms with this user, but it is only this room that has issues for this user. This Exchange server was migrated from 2007 to 2010. All rooms except for the problem room were created back on Exchange 2007; this new room was created with Exchange 2010. Maybe this clue has something to do with the problem? This is a paste of the mailbox's folder permissions:
        [PS] C:\Windows\system32> Get-MailboxFolderPermission -identity room125:\calendar

        RunspaceId   : 448ab44a-4c2a-4403-b46c-fce1106c0823
        FolderName   : Calendar
        User         : Default
        AccessRights : {Author}
        Identity     : Default
        IsValid      : True

        RunspaceId   : 448ab44a-4c2a-4403-b46c-fce1106c0823
        FolderName   : Calendar
        User         : Anonymous
        AccessRights : {None}
        Identity     : Anonymous
        IsValid      : True
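    Folder permissions are only half of resource booking; the Resource Booking Attendant settings are the other common difference between a room created on 2007 and one created on 2010. A hedged check with standard Exchange 2010 cmdlets (the room name is taken from the question; the values shown are examples, not known settings):
        # Compare the problem room against a working room.
        Get-CalendarProcessing -Identity room125 |
            Format-List AutomateProcessing,BookInPolicy,AllBookInPolicy,AllRequestInPolicy

        # If the new room was left at AutoUpdate instead of AutoAccept, requests never book:
        Set-CalendarProcessing -Identity room125 -AutomateProcessing AutoAccept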

    Read the article

  • second ip address on the same interface but on a different subnet

    - by fptstl
    Is it possible in CentOS 5.7 64-bit to have a second IP address on one interface (e.g. eth0) - an alias interface configuration - in a different subnet? Here is the original config for eth0 (more etc/sysconfig/network-scripts/ifcfg-eth0):
        # Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express
        DEVICE=eth0
        BOOTPROTO=static
        BROADCAST=192.168.91.255
        HWADDR=00:1D:09:FE:DA:04
        IPADDR=192.168.91.250
        NETMASK=255.255.255.0
        NETWORK=192.168.91.0
        ONBOOT=yes
    And here is the config for eth0:0 (more etc/sysconfig/network-scripts/ifcfg-eth0:0):
        # Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express
        DEVICE=eth0:0
        BOOTPROTO=static
        BROADCAST=10.10.191.255
        DNS1=10.10.15.161
        DNS2=10.10.18.36
        GATEWAY=10.10.191.254
        HWADDR=00:1D:09:FE:DA:04
        IPADDR=10.10.191.210
        NETMASK=255.255.255.0
        NETWORK=10.39.191.0
        ONPARENT=yes
    How should the resolv.conf file change, since there are two different gateways? Is any other change needed?
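    For what it's worth, resolv.conf only lists nameservers, never gateways, so on a RHEL/CentOS 5 layout the pieces would look roughly like this (addresses copied from the question; treating the second subnet's gateway as a static route is an assumption about the intended traffic flow):
        # /etc/resolv.conf - nameservers only, no gateway lines
        nameserver 10.10.15.161
        nameserver 10.10.18.36

        # /etc/sysconfig/network - a single default gateway for the whole host
        GATEWAY=192.168.91.254     # placeholder: whatever the primary gateway really is

        # /etc/sysconfig/network-scripts/route-eth0 - extra routes, if traffic for
        # specific networks should go via 10.10.191.254 instead of the default
        10.10.0.0/16 via 10.10.191.254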

    Read the article

  • MySQL: Pacemaker cannot start the failed master as a new slave?

    - by quanta
    I'm setting up failover for MySQL replication (1 master and 1 slave) following this guide: https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst
    Here is the output of crm configure show: node serving-6192 \ attributes p_mysql_mysql_master_IP="192.168.6.192" node svr184R-638.localdomain \ attributes p_mysql_mysql_master_IP="192.168.6.38" primitive p_mysql ocf:percona:mysql \ params config="/etc/my.cnf" pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" replication_user="repl" replication_passwd="x" test_user="test_user" test_passwd="x" \ op monitor interval="5s" role="Master" OCF_CHECK_LEVEL="1" \ op monitor interval="2s" role="Slave" timeout="30s" OCF_CHECK_LEVEL="1" \ op start interval="0" timeout="120s" \ op stop interval="0" timeout="120s" primitive writer_vip ocf:heartbeat:IPaddr2 \ params ip="192.168.6.8" cidr_netmask="32" \ op monitor interval="10s" \ meta is-managed="true" ms ms_MySQL p_mysql \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" target-role="Master" is-managed="true" colocation writer_vip_on_master inf: writer_vip ms_MySQL:Master order ms_MySQL_promote_before_vip inf: ms_MySQL:promote writer_vip:start property $id="cib-bootstrap-options" \ dc-version="1.0.12-unknown" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1341801689" property $id="mysql_replication" \ p_mysql_REPL_INFO="192.168.6.192|mysql-bin.000006|338"
    crm_mon: Last updated: Mon Jul 9 10:30:01 2012 Stack: openais Current DC: serving-6192 - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, 2 expected votes 2 Resources configured. ============ Online: [ serving-6192 svr184R-638.localdomain ] Master/Slave Set: ms_MySQL Masters: [ serving-6192 ] Slaves: [ svr184R-638.localdomain ] writer_vip (ocf::heartbeat:IPaddr2): Started serving-6192
    Editing /etc/my.cnf on serving-6192 to contain some wrong syntax, to test failover, works fine: svr184R-638.localdomain is promoted to become the master, and writer_vip switches to svr184R-638.localdomain. Current state: Last updated: Mon Jul 9 10:35:57 2012 Stack: openais Current DC: serving-6192 - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, 2 expected votes 2 Resources configured. ============ Online: [ serving-6192 svr184R-638.localdomain ] Master/Slave Set: ms_MySQL Masters: [ svr184R-638.localdomain ] Stopped: [ p_mysql:0 ] writer_vip (ocf::heartbeat:IPaddr2): Started svr184R-638.localdomain Failed actions: p_mysql:0_monitor_5000 (node=serving-6192, call=15, rc=7, status=complete): not running p_mysql:0_demote_0 (node=serving-6192, call=22, rc=7, status=complete): not running p_mysql:0_start_0 (node=serving-6192, call=26, rc=-2, status=Timed Out): unknown exec error
    After removing the wrong syntax from /etc/my.cnf on serving-6192 and restarting corosync, what I would like to see is serving-6192 started as a new slave, but it isn't: Failed actions: p_mysql:0_start_0 (node=serving-6192, call=4, rc=1, status=complete): unknown error
    Here is a snippet of the logs that I'm suspicious of: Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: rsc:p_mysql:0:4: start Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: RA output: (p_mysql:0:start:stderr) Error performing operation: The object/attribute does not exist Jul 09 10:46:32 serving-6192 crm_attribute: [7420]: info: Invoked: /usr/sbin/crm_attribute -N serving-6192 -l reboot --name readable -v 0
    The full logs: http://fpaste.org/AyOZ/
    The strange thing is that I can start it manually: export OCF_ROOT=/usr/lib/ocf export OCF_RESKEY_config="/etc/my.cnf" export OCF_RESKEY_pid="/var/run/mysqld/mysqld.pid" export OCF_RESKEY_socket="/var/lib/mysql/mysql.sock" export OCF_RESKEY_replication_user="repl" export OCF_RESKEY_replication_passwd="x" export OCF_RESKEY_test_user="test_user" export OCF_RESKEY_test_passwd="x" sh -x /usr/lib/ocf/resource.d/percona/mysql start: http://fpaste.org/RVGh/
    Did I do something wrong?
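    One general Pacemaker point that may or may not apply here: failed actions stick to a resource's fail count, so after the underlying problem is fixed the failure history usually has to be cleared before the node will try to start the resource again. A hedged sketch with standard crm shell commands (resource names taken from the configuration above):
        # Clear the recorded failures so Pacemaker forgets the old failed start and re-probes.
        crm resource cleanup ms_MySQL

        # Then watch whether serving-6192 comes back as a slave.
        crm_mon -1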

    Read the article

  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions.
    The Story
    I have many operating systems installed on my computer, and constantly switch between them. I bought a 1TB HD and formatted it as HFS+ (not journaled). It worked well between OSX and all of my Linux installations while having much better metadata support than NTFS. I never synced the UIDs for my operating systems, so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OSX, and then changing the file ownership of all files on the drive to match OSX. About 50% of the files on the drive were originally owned by OSX; the other half were owned by the various Linux installations. I started to try and change the file permissions for the folders, and that's when it went south.
    The Commands
    These commands were run recursively on the one section of the drive:
        sudo chflags nouchg
        sudo chflags -N
        sudo chown myusername
        sudo chmod 666
        sudo chgrp staff
    The Bad
    Sometime during the execution of these commands, all of the files belonging to OSX were deleted. If a folder had Linux-based files it would remain intact, but any folder containing exclusively OSX files was erased. If a folder containing Linux files also contained a subfolder with only OSX files, the subfolder would remain but is inaccessible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder; I also have a music folder with the same issue, but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all 3 states - the Linux files before and after, and the OSX files before.
    OSX File Before
        -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file
            com.apple.FinderInfo 32
    Linux File before:
        -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file/
            com.apple.FinderInfo 32
    Linux File After (Read only): (Different file, but I believe the same permissions originally)
        -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file
            com.apple.FinderInfo 32
    These files still exist, so if there are any other commands to run on them to determine what has happened here, I can do that.
    EDIT Running ls on one of the "empty" deleted OSX folders yields this:
        ls: .: Permission denied
        ls: ..: Permission denied
        ls: subdirA: Permission denied
        ls: subdirB: Permission denied
        ls: subdirC: Permission denied
        ls: subdirD: Permission denied
    I believe my files might still be there, but the permissions are screwed.
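    One detail worth noting, as a hedged guess rather than a diagnosis: a recursive chmod 666 strips the execute bit from directories as well as files, and a directory without execute cannot be entered or listed, which looks exactly like "Permission denied" on . and every subdirectory. Restoring execute only where it belongs is safe to try (the volume path is a placeholder):
        # Capital X adds execute only to directories (and to files that already had it),
        # so regular media files stay non-executable.
        sudo chmod -R a+X /Volumes/MediaDrive/videos

        # Then check whether the "empty" folders have contents again.
        sudo ls -la /Volumes/MediaDrive/videos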

    Read the article

  • How to know the maximum capacity of memory/RAM of a server? [closed]

    - by Nam G. VU
    I have a Dell PowerEdge T410 and have installed 16 GB of RAM in it. I don't know if I can extend it to 32 GB, 64 GB, ... of RAM on this server. According to the description, it seems I have no such information. So my question is: how do I find out the exact maximum memory/RAM capacity of a server? I need this information so I can verify the number myself. P.S. My server is not exactly the same as the description on the Dell website - I modified the configuration to meet my budget, so my current server is the configuration I actually purchased.
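    If the server runs Linux, the firmware's DMI tables report the board's own limit, independent of what the product page says; a quick check (on Windows, wmic memphysical get MaxCapacity gives similar information):
        # Physical Memory Array: shows "Maximum Capacity" and "Number Of Devices" (slots).
        sudo dmidecode -t 16

        # Per-slot details: what is installed in each DIMM socket right now.
        sudo dmidecode -t 17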

    Read the article

  • Need help making site available externally

    - by White Island
    I'm trying to open a hole in the firewall (ASA 5505, v8.2) to allow external access to a Web application. Via ASDM (6.3?), I've added the server as a Public Server, which creates a static NAT entry [I'm using the public IP that is assigned to 'dynamic NAT--outgoing' for the LAN, after confirming on the Cisco forums that it wouldn't bring everyone's access crashing down] and an incoming rule "any... public_ip... https... allow" but traffic is still not getting through. When I look at the log viewer, it says it's denied by access-group outside_access_in, implicit rule, which is "any any ip deny". I haven't had much experience with Cisco management. I can't see what I'm missing to allow this connection through, and I'm wondering if there's anything else special I have to add. I tried adding a rule (several variations) within that access-group to allow https to the server, but it never made a difference. Maybe I haven't found the right combination? :P I also made sure the Windows firewall is open on port 443, although I'm pretty sure the current problem is Cisco, because of the logs. :) Any ideas? If you need more information, please let me know. Thanks.
    Edit: First of all, I had this backward. (Sorry) Traffic is being blocked by access-group "inside_access_out" which is what confused me in the first place. I guess I confused myself again in the midst of typing the question. Here, I believe, is the pertinent information. Please let me know what you see wrong.
        access-list acl_in extended permit tcp any host PUBLIC_IP eq https
        access-list acl_in extended permit icmp CS_WAN_IPs 255.255.255.240 any
        access-list acl_in remark Allow Vendor connections to LAN
        access-list acl_in extended permit tcp host Vendor any object-group RemoteDesktop
        access-list acl_in remark NetworkScanner scan-to-email incoming (from smtp.mail.microsoftonline.com to PCs)
        access-list acl_in extended permit object-group TCPUDP any object-group Scan-to-email host NetworkScanner object-group Scan-to-email
        access-list acl_out extended permit icmp any any
        access-list acl_out extended permit tcp any any
        access-list acl_out extended permit udp any any
        access-list SSLVPNSplitTunnel standard permit LAN_Subnet 255.255.255.0
        access-list nonat extended permit ip VPN_Subnet 255.255.255.0 LAN_Subnet 255.255.255.0
        access-list nonat extended permit ip LAN_Subnet 255.255.255.0 VPN_Subnet 255.255.255.0
        access-list inside_access_out remark NetworkScanner Scan-to-email outgoing (from scanner to Internet)
        access-list inside_access_out extended permit object-group TCPUDP host NetworkScanner object-group Scan-to-email any object-group Scan-to-email
        access-list inside_access_out extended permit tcp any interface outside eq https
        static (inside,outside) PUBLIC_IP LOCAL_IP[server object] netmask 255.255.255.255
    I wasn't sure if I needed to reverse that "static" entry, since I got my question mixed up... and also with that last access-list entry, I tried interface inside and outside - neither proved successful... and I wasn't sure about whether it should be www, since the site is running on https. I assumed it should only be https.
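    A hedged observation, not a verified fix: an ACL applied in the "out" direction on the inside interface filters traffic the firewall delivers to inside hosts, and at that point the destination is the server's real (un-NATted) address, so a permit written against "interface outside" would not match. If that reading is right, the rule would look roughly like this (names follow the question's placeholders):
        ! Permit the inbound HTTPS traffic as it egresses toward the LAN server
        access-list inside_access_out extended permit tcp any host LOCAL_IP eq https
        ! ...and make sure the list is applied in the out direction on inside
        access-group inside_access_out out interface inside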

    Read the article

  • collectd does not work

    - by bery
    I have installed collectd-5.0.0 on a Fedora 12 server and would like to run its service to receive data from clients. I have enabled the network plugin and the rrdtool plugin as shown.
    collectd.conf on the server:
        BaseDir "/opt/collectd/var/lib/collectd"
        LoadPlugin "logfile"
        LoadPlugin network
        LoadPlugin rrdtool
        <Plugin network>
          Listen "192.168.8.37" "25826"
        </Plugin>
    collectd.conf on the client:
        LoadPlugin logfile
        LoadPlugin cpu
        LoadPlugin network
        LoadPlugin memory
        <Plugin network>
          Server "192.168.8.37" "25826"
        </Plugin>
    collectd.log on the server:
        [2011-08-03 02:36:04] Exiting normally.
        [2011-08-03 02:36:04] rrdtool plugin: Shutting down the queue thread.
        [2011-08-03 02:36:04] network plugin: Stopping receive thread.
        [2011-08-03 02:36:04] network plugin: Stopping dispatch thread.
        [2011-08-03 02:37:11] Initialization complete, entering read-loop.
    collectd.log on the client:
        [2011-08-02 17:31:44] Initialization complete, entering read-loop.
    Results of running netstat on the server:
        netstat -ulpn | grep 25826
        udp 0 0 192.168.8.37:25826 0.0.0.0:* 4744/collectd
    Problem: there is nothing in "/opt/collectd/var/lib/collectd/" on the server. Yes, I used the port number "25826" as proposed (but I think this is the default port for collectd); no RRD files are received on the server.
    collectd.log on the client:
        [2011-08-03 10:01:36] plugin_read_thread: Handling `memory'.
        [2011-08-03 10:01:36] plugin_read_thread: Handling `cpu'.
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = used;
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = user;
        [2011-08-03 10:01:36] uc_update: uml/memory/memory-used: ds[0] = 280412160.000000
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-user: ds[0] = 0.100008
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = buffered;
        [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = nice;
        [2011-08-03 10:01:36] uc_update: uml/memory/memory-buffered: ds[0] = 344182784.000000
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-nice: ds[0] = 0.000000
        [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network.
        [2011-08-03 10:01:36] network plugin: flush_buffer: send_buffer_fill = 1340
        [2011-08-03 10:01:36] network plugin: network_send_buffer: buffer_len = 1340
        ...
        [2011-08-03 10:01:36] plugin_read_thread: Next read of the cpu plugin at 1312380106.429064774.
    collectd.log on the server:
        [2011-08-03 20:18:08] type = network
        [2011-08-03 20:18:08] type = rrdtool
        [2011-08-03 20:18:08] network plugin: sockent_open: node = 192.168.8.37; service = 25826;
        [2011-08-03 20:18:08] fd = 3; calling `bind'
        [2011-08-03 20:18:08] Done parsing `/opt/collectd//share/collectd/types.db'
        [2011-08-03 20:18:08] interval_g = 10;
        [2011-08-03 20:18:08] timeout_g = 2;
        [2011-08-03 20:18:08] hostname_g = localhost.localdomain;
        [2011-08-03 20:18:08] Initialization complete, entering read-loop.
    It looks like data is being sent but not received. Where is the mistake?
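    A hedged sketch of the server-side bit that is easy to miss: the rrdtool plugin writes under its DataDir (falling back to BaseDir if unset), and if the server does not know a received type (a types.db mismatch between client and server versions) it logs an error and drops the value. Something along these lines on the server, as an illustration rather than a known fix:
        # collectd.conf (server) - explicit paths so it is obvious where RRDs should appear
        BaseDir "/opt/collectd/var/lib/collectd"

        LoadPlugin network
        LoadPlugin rrdtool

        <Plugin network>
          Listen "192.168.8.37" "25826"
        </Plugin>

        <Plugin rrdtool>
          DataDir "/opt/collectd/var/lib/collectd/rrd"
        </Plugin>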

    Read the article

  • Hypervisor for mixed client and server OSes

    - by Mark
    I need to replace three old boxes I use for development, running Linux, Windows Server and Windows XP. Instead of purchasing three new boxes, I am thinking of purchasing a single box and virtualizing the OSes. As it is for development, absolute performance is not a problem, but I want the Linux and Windows servers to run continuously, while running Win 7 as if it were a regular PC. Therefore running Linux and Windows Server on top of Win 7 is not an option. Is this a viable solution? Has anyone done this? What is performance like? I'd like to get decent graphics performance with Win 7, sufficient to run the occasional game. If so, I'm looking for suggestions or recommendations on which hypervisor or virtualization option to go for.

    Read the article

  • supervise apache with daemontools

    - by perlwle
    I am trying to set up daemontools for two Apaches on one server: one Apache 2.2 listening on port 80 proxies requests to a second Apache 1.3 listening on port 8888. The ./run scripts are as follows:
        #!/bin/sh
        # apache 1.3
        exec /apache_1_3/apache/bin/httpd -F

        #!/bin/sh
        # apache 2.2
        exec /apache_2_2/apache/bin/httpd -D FOREGROUND
    daemontools monitors both Apaches fine. However, if I stop Apache 2.2 (using svc -t or apachectl), Apache 1.3 will log the following error in error_log:
        [crit] (98)Address already in use: make_sock: could not bind to port 8888
    I had to manually apachectl stop Apache 1.3 to stop the error message from clobbering the log file. There was no such problem before using daemontools. Any idea why this is happening?
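    Not necessarily the cause, but one daemontools detail worth keeping in mind when mixing it with apachectl: svc -t does not stop a service, it restarts it, because supervise immediately relaunches anything that exits. The standard controls look like this (the /service directory names are placeholders for however the two run directories are linked):
        svc -t /service/apache22     # send TERM; supervise restarts it immediately
        svc -d /service/apache22     # send TERM and keep it down until told otherwise
        svc -u /service/apache22     # bring it back up
        svstat /service/apache22 /service/apache13   # show what supervise thinks is running
    Mixing apachectl stop/start with supervised services tends to fight supervise for control of the processes, so using svc exclusively is the usual recommendation.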

    Read the article

  • Detect (and remove) non-ascii characters from files

    - by Mawg
    Something similar to this question, but that one ended up with a programming answer. I am looking for a ready-made Windows application to recurse a directory tree and, at the very least, notify me of all files which contain non-ASCII characters. It would be nice if it would also do any of the following (in no particular order):
    - let me specify which file extensions to scan
    - show the file contents, with the non-ASCII characters highlighted
    - automagically remove such characters, or ...
    - let me launch an editor to remove them manually
    Know you of such a beast? Thanks a 1,000,000 in advance.
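    Not the GUI tool being asked for, but as a point of comparison, the detection step is a one-liner wherever GNU grep with PCRE support is available on Windows (Cygwin, Git Bash, and the like); the path and extension filter are placeholders:
        # List files under the tree that contain at least one byte outside 0x00-0x7F.
        grep -rlP '[^\x00-\x7F]' --include='*.txt' /path/to/tree

        # Show the offending lines with file name and line number.
        grep -rnP '[^\x00-\x7F]' --include='*.txt' /path/to/tree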

    Read the article

  • Tomato OS: "memory exhausted" running vi .... how to solve?

    - by Sam Jones
    I have set up Tomato (Shibby) on an Asus RT-N66U router. It works great. I loaded up a few pieces, like Transmission and Optware. I can run vi, but when I run vi with a file name it fails with a "memory exhausted" error, and the terminal session hangs. For reference: if I simply start vi it runs fine, but if I specify a file (vi <filename>) I get the memory exhausted error, even if the file I am opening is just a couple of hundred bytes in size (like fstab). I discovered that my swap partition was not properly set up, so I fixed that. The swapon command now indicates I really do have swap:
        [root@MyRouter samba]$ swapon -s
        Filename  Type       Size      Used  Priority
        /dev/sda1 partition  32900860  0     1
    How can I get vi to work? Thanks!
    System setup reference information:
    - Asus RT-N66U router
    - 2TB USB hard drive
    - partitions on the hard drive:
        Disk /dev/sda: 2000.4 GB, 2000398839808 bytes
        255 heads, 63 sectors/track, 30400 cylinders
        Units = cylinders of 16065 * 4096 = 65802240 bytes
        Disk identifier: 0xfacbc8ab
        Device Boot Start End Blocks Id System
        /dev/sda1 1 512 32900868 82 Linux swap / Solaris
        /dev/sda2 513 29000 1830638880 83 Linux
    - running samba
    - memory:
        $ cat /proc/meminfo
        MemTotal: 255840 kB
        MemFree: 210980 kB
        Buffers: 5264 kB
        Cached: 22768 kB
        SwapCached: 0 kB
        Active: 20272 kB
        Inactive: 11448 kB
        HighTotal: 131072 kB
        HighFree: 99868 kB
        LowTotal: 124768 kB
        LowFree: 111112 kB
        SwapTotal: 32900860 kB
        SwapFree: 32900860 kB
        Dirty: 0 kB
        Writeback: 0 kB
    TIA!

    Read the article

  • re-use `~/.profile` for Fish?

    - by Albert
    (I'm talking about the shell Fish, esp. Fish's Fish.) For Bash/zsh, I had a ~/.profile with some exports, aliases and other stuff. I don't want to have a separate config of environment variables for Fish; I want to re-use my ~/.profile. How? The FAQ states that I can at least import those via /usr/local/share/fish/tools/import_bash_settings.py; however, I don't really like running that for each Fish instance.

    Read the article

  • way to access Win networked folder on Mac like one can with Run box on Win

    - by Nik
    I was wondering if there is a way to access a deeply nested shared networked Windows folder on a Mac (Lion) like you can on Windows with the Run box. Say I know the folder I want is on the computer Homer-PC, on the G drive, in the folder ABC/234/CDE. On Windows I can press Win+R, type \\Homer-PC\G\ABC\234\CDE and hit return, and it should open that folder directly. Is there something similar on the Mac? Or is there some tiny app that I can use to do that? Thank you.
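    A rough macOS equivalent, using the share path from the question (whether "G" is actually shared under that name is an assumption): Finder's Go > Connect to Server takes an smb:// URL, and the same thing works from Terminal:
        # Mount the Windows share (may prompt for credentials), then open the nested folder.
        open 'smb://Homer-PC/G'
        open '/Volumes/G/ABC/234/CDE'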

    Read the article

  • Why do password entries over ssh take so long?

    - by Dean
    When I'm ssh'd into my server, any time I enter my password, there's a 40 second delay before the server responds. This occurs when logging in, as well as whenever I run a command via sudo. The delay does not happen when I run su and enter my password however. Using the -v flag for ssh doesn't show anything during this time. Looking at Wireshark, all traffic between the two machines stops while this is happening. Any idea what's happening, or advice on how to investigate this? The server is running Debian squeeze (6.0.4)
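    A few hedged things to rule out, since a fixed delay around PAM authentication is classically a name-resolution timeout rather than ssh itself (the option names below are standard OpenSSH/Debian ones; whether they apply here is a guess):
        # On the server: does sudo also stall because it cannot resolve its own hostname?
        time sudo -k true                 # -k forces re-authentication so the delay is measurable
        grep "$(hostname)" /etc/hosts     # the hostname should map to 127.0.1.1 or a real IP

        # A dead nameserver in resolv.conf times out slowly on every lookup.
        cat /etc/resolv.conf

        # sshd side: reverse DNS of the connecting client; disabling it is a common fix.
        grep -i '^UseDNS' /etc/ssh/sshd_config    # "UseDNS no" avoids the reverse lookup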

    Read the article

  • How do I configure custom routes when an interface is configured?

    - by ManicDee
    Other Superuser questions have addressed the issue of adding custom routes to access, e.g., multiple networks of a corporate network through one interface, while accessing the Internet through another interface. So assuming that I have a script to add specific routes when en0 is configured, and a separate script to add specific routes when en1 is configured, is there some way I can trigger those scripts to run automatically when Mac OS X/Darwin starts and configures those interfaces? Back in my Linux days, it was possible to add an option in /etc/network/interfaces along the lines of:
        iface eth0 inet dhcp
            up /usr/local/sbin/eth0-routes-up
    Is there something similar for Mac OS X?
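    One hedged way to approximate the "up" hook on OS X is a launchd job that watches the system network configuration store and re-runs the route script whenever it changes; the label, script path and watched path below are illustrative:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
          <key>Label</key>            <string>local.custom-routes</string>
          <key>ProgramArguments</key>
          <array>
            <string>/usr/local/sbin/update-routes.sh</string>
          </array>
          <key>WatchPaths</key>
          <array>
            <!-- changes whenever interfaces come up/down or change address -->
            <string>/Library/Preferences/SystemConfiguration</string>
          </array>
          <key>RunAtLoad</key>        <true/>
        </dict>
        </plist>
    Saved as /Library/LaunchDaemons/local.custom-routes.plist and loaded with launchctl load, the script itself can check which interfaces are up (for example with ipconfig getifaddr en0) and add the appropriate route -n add entries for each one.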

    Read the article
