Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • Hosting websites in our workplace's custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our hosting providers (both shared and dedicated) and host the websites in our office's datacentre. We're running 7 websites, currently averaging about 900 unique hits per day. We have 2 servers set aside for this: a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD) and an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD).

    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP, and in all probability the OS is going to be CentOS.
    b) Do you think I should bring virtualization (KVM/Xen) into the equation? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go.
    c) Should I try a cloud stack like OpenStack and make it look like websites hosted on some sort of public cloud? (Something I checked out here.) Here is something else I came across, which looks similar to what needs to be done at our office.

    About the websites: of the 7, 4 are basic static sites that give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, where end users can view products and order them online. I am taking this as a learning experience, where I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.
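
    If the KVM option in (b) looks attractive, splitting the database into its own guest is a one-command job with libvirt's virt-install. A minimal sketch, assuming libvirt/KVM and a bridge br0 are already configured on the CentOS host (all names, sizes and URLs below are illustrative only):

        # create a small guest for the database tier (values are examples)
        virt-install --name db01 --ram 2048 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/db01.img,size=20 \
            --network bridge=br0 --graphics none \
            --location http://mirror.centos.org/centos/5/os/i386/ \
            --extra-args 'console=ttyS0'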

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves: http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/ I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails installing the ssl-cert package:

        $ sudo apt-get install php5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5 is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up ssl-cert (1.0.28) ...
        Could not create certificate. Openssl output was:
        Generating a 2048 bit RSA private key
        ............................+++
        ...................................................................................................................+++
        writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
        -----
        problems making Certificate Request
        140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
        dpkg: error processing ssl-cert (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        ssl-cert
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any tips appreciated.
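
    The telling part of the output is maxsize=64: the snakeoil certificate's CN field is limited to 64 characters, and the long internal FQDNs Azure assigns can exceed that. A commonly suggested workaround is to shorten the hostname temporarily, regenerate the certificate, and let dpkg finish configuring (the hostname value here is an example):

        sudo hostname myvm
        sudo make-ssl-cert generate-default-snakeoil --force-overwrite
        sudo dpkg --configure -a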

  • Access denied error 3221225578 with file sharing to Windows server

    - by Ian Boyd
    I'm trying to access the shares on a server. The credential box appears, I enter a correct username and password, and I get access denied. The silly thing is that I can Remote Desktop to the server (using the same credentials), and I can check the Security event log for the access denied errors:

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Account Logon
        Event ID:       681
        Date:           3/19/2011
        Time:           11:54:39 PM
        User:           NT AUTHORITY\SYSTEM
        Computer:       STALWART
        Description:
        The logon to account: Administrator
        by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        from workstation: HARPAX
        failed. The error code was: 3221225578

    and

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Logon/Logoff
        Event ID:       529
        Date:           3/19/2011
        Time:           11:54:39 PM
        User:           NT AUTHORITY\SYSTEM
        Computer:       STALWART
        Description:
        Logon Failure:
            Reason:                 Unknown user name or bad password
            User Name:              Administrator
            Domain:                 stalwart
            Logon Type:             3
            Logon Process:          NtLmSsp
            Authentication Package: NTLM
            Workstation Name:       HARPAX

    Looking up the error code (3221225578), I get an article on TechNet, "Audit Account Logon Events" by Randy Franklin Smith:

        Table 1 - Error Codes for Event ID 681
        Error Code   Reason for Logon Failure
        3221225578   The username is correct, but the password is wrong.

    This would seem to indicate that the username is correct but the password is wrong. I've tried the password many times: uppercase, lowercase, on different user accounts, with and without prefixing the username with servername\username. What gives that I cannot access the server over file sharing, but I can access it over RDP?
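
    Since the GUI prompt and RDP can end up authenticating differently (local vs. domain account, NTLM vs. Kerberos), it may be worth mapping the share from a command prompt with a fully qualified account; the server and account names below are simply the ones from the logs, so adjust as needed:

        net use \\STALWART\C$ /user:STALWART\Administrator *
        rem if that fails, clear any cached connections first and retry
        net use * /delete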

  • What are some techniques to monitor multiple instances of a piece of software?

    - by Geo Ego
    It was recommended that I ask this question here by a member of StackOverflow. I have a piece of self-serve kiosk software that will be running at multiple sites. I'd like to monitor their status remotely. The kiosk application itself is pretty much finished. I am now in the process of creating a piece of software that will monitor all of the kiosks from a central location so that the customer can view particular details remotely (for instance, how many bills are in the acceptor's cash cartridge, what customer is currently logged in, etc.). Because I am in such an early stage of development, my options are quite open. I understand that I'm not giving very many qualifications, but I'd like to try to get a good variety of potential solutions. Some details:

    - Kiosk software is a VB6 app running on Windows Embedded
    - Monitoring software will be run on a modern desktop version of Windows (either XP, Vista, or 7)
    - Database is SQL Server 2008

    My initial idea was to develop a .NET app that would simply report the last database transaction for each kiosk at a set interval (say every second or so), but I'd really like for the kiosk software to report its status in real-time. I'm not exactly sure where to begin in terms of what modifications may need to be made to the kiosk software, and what the monitoring software will require. Links to articles on these topics would be most welcome.

  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022:

        usera@cmp$ touch somefile; ls -l
        total 0
        -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile

    The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created there by any user are automatically owned by the shared group. There is a cron job taking care of changing the owning user and owning group (of any moved files) once per day:

        usera@cmp$ cat /etc/cron.daily/sharedscript
        #!/bin/bash
        chown -R root:shared /home/shared/
        chmod -R 770 /home/shared/

    I was writing a really large file to the shared directory. It had me (usera) as owning user and the shared group as group owner. During the write operation the cron job ran, and I still had no problem completing the write. You see, I thought this would happen: I am writing the file, and its permissions and ownership look like this:

        -rw-r--r-- usera shared

    The cron job kicks in! The chown line is processed and now the file is owned by the root user and the shared group. As the owning group only has read access to the file, I get a write error. Boom, end of story. Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
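
    For reference, the behaviour described here is easy to reproduce: Unix permissions are checked when a file is opened, not on each write(), so a descriptor that is already open keeps working after ownership or mode changes. A quick illustrative shell test:

        # open fd 3 for writing, then revoke all permissions on the file
        echo start > /tmp/permdemo
        exec 3>>/tmp/permdemo
        chmod 000 /tmp/permdemo
        echo "this write still succeeds" >&3   # fd was opened before the chmod
        exec 3>&-                              # close the descriptor
        rm -f /tmp/permdemo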

  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2. I have two guest machines. One is an SME Server, and it crashed several times during a backup session using rsync to another PC; normal activities run regularly. The other guest has been up without problems for 25 days. In the log I found a lot of rows like these:

        Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s)
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452
        Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok)
        Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok)

    When the virtual machine crashes, the log reports the following (I paste only some parts to limit the length of the message):

        Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700
        Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03
        Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6.
        Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB.
        Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0x6).
        Dec 27 01:48:05.762: Worker#2| SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.779: Worker#2| Msg_Post: Error
        Dec 27 01:48:05.780: Worker#2| http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2)
        Dec 27 01:48:05.780: Worker#2| Unexpected signal: 6.

    I have no idea how to solve the problem with this installation. I am thinking of downgrading the host to a version more compatible with VMware Server 2. I have read a lot of posts about installation difficulties, and I think the compilation problems during install could be related to my issue. Excuse me if the post isn't very clear; it's my first post here. Any help or suggestion will be appreciated. Thanks.
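
    Since the log says "Core dump limit is 0 KB" and the worker then fails to dump core, one low-risk first step is to let the host actually produce a core the next time it crashes, so there is something to analyse or attach to a bug report. A sketch, assuming VMware Server was installed with its usual init script (paths may differ on your system):

        # raise the core dump limit, then restart the vmware services from
        # this shell so they inherit it
        ulimit -c unlimited
        sudo /etc/init.d/vmware restart
        # after the next crash, look for core files in the VM's directory
        ls -l /var/lib/vmware/Virtual\ Machines/*/core*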

  • Windows PE network setup

    - by microchasm
    I'm walking through the following step-by-step guide for deploying Windows 7 via AIK: http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx On step 4 (Capturing the Installation onto a Network Share), I run into a bit of a snag: attempting to connect to a network drive repeatedly fails. I'm using/deploying Dell OptiPlex 380 64-bit machines, and the network cards seem to be really wonky. On the machine that I'm using to run AIK etc., the network driver wasn't found automatically; I had to go in and install it manually (it was found on the OEM installation media). I've since copied it to the USB key that I'm using for the Autounattend.xml, so it's handy. I think that because of this, the PE environment doesn't or can't instantiate the network device. Is there a way to install/configure the network device through the command prompt in PE? If not, I read about adding the driver path(s) to the answer file, but if I did it this way, would I have to start the process all over again (i.e. create a new Autounattend.xml with the PnPCustomizations path included, re-run the installation on the reference machine, install all the applications, re-make the PE ISO, reboot into the new PE ISO)? Any shortcuts, direction, or advice would be appreciated. Thanks!
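
    Windows PE does let you load a NIC driver at runtime without rebuilding anything. A sketch, assuming the Dell driver's .inf is on the USB key mounted as D: (the drive letter, path and file names are examples):

        drvload D:\drivers\nic\e1000.inf
        rem re-run PE's network initialisation after loading the driver
        wpeutil InitializeNetwork
        ipconfig
        net use Z: \\server\share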

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager for our Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (hundreds of hosts, multiple DCs, dozens of services). There seem to be plenty of lower-level sysadmin tools such as Chef, Puppet, Cfengine, Bcfg2 and more. The reason I call them "low level" is that they are great for system-level administration tasks such as setting up a mount, file permissions, users, etc., but they aren't designed for, say, Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task it can get uncomfortable. ControlTier, on the other hand, seems to have been designed just for that, Java application deployments, or at least that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open-source CT product is very responsive (pushy...); however, I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters. Nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: first, it makes me suspicious about the reason for the poor adoption; second, what do I do if I get stuck, there's no help page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing. So my question: I'd be interested in hearing whether anyone has real-world experience with CT for Java-based web app deployments, and whether you'd give the product a thumbs up. Any other comments that may enlighten me are welcome, of course.

  • Permissions Required for SharePoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

        param( $url="http://localhost", $backupFolder="c:\" )
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names) {
            $n2 = ""
            if ($name.Length -eq 0) { $n2 = "ROOT" } else { $n2 = $name }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) { $saveas = $tmp }
            else { $saveas = join-path -path $backupFolder -childPath $tmp }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            write-host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found." That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb and rather is some permissions issue, especially because the script works perfectly when executed from my security context. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?
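
    Object-model access like this also depends on the account's rights on the SQL side (the content databases), which farm-owner membership alone does not grant, so that is worth checking. As a sanity test through a different code path, the built-in backup tool can be run under the same account; a sketch, with the URL and file path as examples only:

        stsadm -o backup -url http://extranet -filename c:\backups\extranet.bak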

  • What privilege level is required on a Windows client workstation in an Active Directory domain to break file locks?

    - by Mike Burton
    I'm not sure if I should be asking this here or on StackOverflow, but here goes: I'm part of a team maintaining a document management application, and I'm trying to figure out Windows file locking permissions. We use a utility somebody downloaded years ago called psunlock to remotely close all locks on a file. We recently discovered that this does not work across different domains on our VPN. A little bit of digging led me to the Samba manual's discussion of file locking, but I still don't really "get it". Does anyone have any insight to share into how the process of locking and breaking locks on files works in a network context? My thinking is that privileges are required both on the file appliance and on the client workstations which hold locks. Is that accurate? Can anyone give a more specific version? Ideally I'm looking for something along the lines of "a user must have privilege level X in order to break locks held from a client workstation". In practice I'd be happy with a hotlink to a good white paper on the subject.
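
    For SMB shares served by Windows, the locks live on the server, and closing them needs administrative rights there (for instance Administrators or Server Operators membership), not on the client holding the lock. The built-in equivalent of what psunlock does, sketched with a hypothetical file ID:

        rem on the server hosting the share: list open files and their IDs
        net file
        rem force-close the lock with ID 42 (the ID is an example)
        net file 42 /close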

  • Red Hat server minimal install

    - by chmeee
    In a farm of virtualized Red Hat servers, there's a need to install a minimal system for security reasons. Minimal installs have several advantages (not all of them security-related):

    - Less exposure to vulnerabilities (if you don't need it, don't install it)
    - A better update process (fewer packages to update, less probability of breaking the system)
    - Better performance (no unneeded daemons or processes)
    - The less software you have, the easier it is to harden the system

    Unfortunately, this is not easy, because the "Minimal Installation" option on Red Hat contains lots of unnecessary packages. There is an added challenge, as the farm is running Oracle iAS: I've been told that iAS has dependencies on a local graphical environment, so every server in the farm ends up with GNOME, X, etc. I've been searching the web, and one solution seems to be a kickstart script that installs only the necessary packages, but I find this difficult and have several doubts about how to maintain the system's dependencies afterwards. How do you install minimal Red Hat servers? Is it OK to use kickstart, or will I have dependency problems in the installation or in updates? Is there any way to avoid installing the graphical environment for iAS?
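
    For what it's worth, the package set in a kickstart file is trimmed in its %packages section; a minimal sketch (the group, package additions and exclusions below are illustrative, and the exact achievable minimum varies by RHEL release):

        %packages --nobase
        @core
        openssh-server
        -NetworkManager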

  • CentOS 6.3 can't install Git

    - by drivard
    I am trying to install git on an OpenVZ container using the CentOS 6.3 precreated template. When I try yum install git I get:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: www.cubiculestudio.com
         * epel: mirror.csclub.uwaterloo.ca
         * extras: www.cubiculestudio.com
         * rpmforge: mirror.us.leaseweb.net
         * updates: www.cubiculestudio.com
        Setting up Install Process
        No package git available.
        Error: Nothing to do

    As I understand it, the git package should be in the CentOS 6 base repository: http://pkgs.org/centos-6-rhel-6/centos-rhel-x86_64/git-1.7.1-2.el6_0.1.x86_64.rpm.html But yum doesn't find it, even though I have the EPEL and RPMForge repos enabled:

        yum repolist
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: www.cubiculestudio.com
         * epel: mirror.csclub.uwaterloo.ca
         * extras: www.cubiculestudio.com
         * rpmforge: mirror.us.leaseweb.net
         * updates: www.cubiculestudio.com
        repo id      repo name                                     status
        base         CentOS-6 - Base                                4,776
        epel         Extra Packages for Enterprise Linux 6 - i386   6,523
        extras       CentOS-6 - Extras                                  4
        rpmforge     RHEL 6 - RPMforge.net - dag                    4,501
        updates      CentOS-6 - Updates                               596
        vz-base      vz-base                                            3
        vz-updates   vz-updates                                         0
        repolist: 16,403

    The weirdest thing is that my OpenVZ host is also running CentOS 6.3, and there I was able to install git without any issue. Can you help me understand why it doesn't find the package? Thank you in advance.
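
    Two quick things worth checking in this situation are a stale yum cache and an exclude line in the container's yum configuration (container templates sometimes ship one). A sketch:

        # look for excludes that could filter git out of the package lists
        grep -i exclude /etc/yum.conf /etc/yum.repos.d/*.repo
        # rebuild the metadata cache and retry
        yum clean all && yum makecache
        yum install git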

  • How do I hook into Tar with BASH?

    - by orb
    Long story short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into the extraction function of tar, to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple

        cat $input_file | base64 -d > $output_file

    will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of tar's execution. However, the documentation explains that these variables, along with other variables that configure tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that the file is not present on my Ubuntu 13.04 system.

    Background information not included in the long story short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to GitHub yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64 encoding.
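
    As an aside, the backup-specs machinery belongs to GNU tar's backup scripts rather than to extraction. What may serve here instead is tar's --to-command option, which pipes each extracted regular file to a command and exposes the member name in the TAR_FILENAME environment variable. A sketch (archive and script names are examples):

        # helper that decodes stdin from base64 and writes the original path
        cat > /tmp/b64unpack.sh <<'EOF'
        #!/bin/sh
        mkdir -p "$(dirname "$TAR_FILENAME")"
        base64 -d > "$TAR_FILENAME"
        EOF
        chmod +x /tmp/b64unpack.sh

        # extract, piping every file through the helper instead of writing
        # it verbatim
        tar -xf effect.tar --to-command=/tmp/b64unpack.sh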

  • ifcfg-ethX problem

    - by Shahmir Javaid
    Every time I run service network restart, this is what I get:

        Shutting down interface eth0:  Device state: 3 (disconnected)  [  OK  ]
        Shutting down interface eth1:                                  [  OK  ]
        Shutting down loopback interface:
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
                                                                       [  OK  ]
        Bringing up loopback interface:
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
        Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
                                                                       [  OK  ]
        Bringing up interface eth0:
        ** (process:12951): WARNING **: fetch_connections_done: error fetching user connections: (2) The name org.freedesktop.NetworkManagerUserSettings was not provided by any .service files.
        Active connection state: activating
        Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
        state: activated
        Connection activated                                           [  OK  ]

    Here is my ifcfg-eth0:

        # Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
        DEVICE=eth0
        BOOTPROTO=dhcp
        DEFROUTE=yes
        DHCPCLASS=
        HWADDR=xxx
        IPV4_FAILURE_FATAL=yes
        IPV6INIT=no
        ONBOOT=yes
        OPTIONS=layer2=1
        PEERDNS=yes
        PEERROUTES=yes
        TYPE=Ethernet
        UUID=xxx

    And my ifcfg-eth1:

        # Intel Corporation 82541PI Gigabit Ethernet Controller
        DEVICE=eth1
        HWADDR=xxx
        ONBOOT=no

    And my ifcfg-lo:

        DEVICE=lo
        IPADDR=127.0.0.1
        NETMASK=255.0.0.0
        NETWORK=127.0.0.0
        # If you're having problems with gated making 127.0.0.0/8 a martian,
        # you can change this to something else (255.255.255.255, for example)
        BROADCAST=127.255.255.255
        ONBOOT=yes
        NAME=loopback

    Any ideas?
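
    The errors come from NetworkManager's ifcfg plugin trying (and failing) to parse ifcfg-lo; the interfaces still come up, as the [ OK ] markers show. One commonly suggested way to silence the noise is to mark the loopback file as not managed by NetworkManager; a sketch (back the file up first):

        echo 'NM_CONTROLLED=no' | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-lo
        sudo service network restart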

  • SkyDrive broken after upgrade to Windows 8.1: "This location can't be found, please try later"

    - by avo
    Upgrading from Windows 8 to Windows 8.1 via the Store upgrade path has screwed my SkyDrive. The C:\Users\<user name>\SkyDrive folder is empty (it contains only a single file, desktop.ini). When I open the native (Store) SkyDrive app, I see "This location can't be found, please try later". I'm glad my files are still alive online in my SkyDrive account. I tried disconnecting from / reconnecting to my Microsoft account with no luck. Does anyone have an idea how to fix this without reinstalling/refreshing Windows 8.1? From Event Viewer:

        Faulting application name: skydrive.exe, version: 6.3.9600.16412, time stamp: 0x5243d370
        Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
        Exception code: 0x00000000
        Fault offset: 0x0000000000000000
        Faulting process ID: 0x4e8
        Faulting application start time: 0x01cece256589c7ee
        Faulting application path: C:\Windows\System32\skydrive.exe
        Faulting module path: unknown
        Report ID: {...}
        Faulting package full name:
        Faulting package-relative application ID:

    Also:

        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {C2F03A33-21F5-47FA-B4BB-156362A2F239} and APPID {316CDED5-E4AE-4B15-9113-7055D84DCC97} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19) from address LocalHost (Using LRPC) running in the application container Unavailable SID (Unavailable). This security permission can be modified using the Component Services administrative tool.

    I was never a big fan of in-place upgrades anyway, but this time it was a machine I use for work, with a lot of stuff already installed on it. I shouldn't have tried to upgrade it in the first place, but I was convinced Windows 8.1 was a solid update. Another lesson learnt.

  • No internet access when using static IP

    - by Endy Tjahjono
    I have just upgraded to Windows 8.1, and after the upgrade process finished I couldn't connect to the internet. I ran "Troubleshoot problems", which concluded that DHCP needed to be activated. I let it activate DHCP, and I got the internet connection back. The problem is that I want to set this PC to a static IP address (the IP address it has been using all this time). I am also using Hyper-V, which I suspect has something to do with this problem. After I regained the internet connection, I started one of my Hyper-V VMs; from inside the VM I can connect to the internet, and that VM has a static IP address. I also noticed that in Control Panel\Network and Internet\Network Connections I usually have a network connection called vEthernet (Realtek PCIe GBE Family Controller Virtual Switch), and it wasn't there after the upgrade. How do I set my PC to a static IP while retaining internet access in Windows 8.1?

    EDIT: I have managed to recreate vEthernet (Realtek PCIe GBE Family Controller Virtual Switch) by unchecking "Allow management operating system to share this network adapter" in Hyper-V's Virtual Switch Manager and then checking it again. But when I change the adapter to use a static IP, it still can't connect to the internet. Result of Get-NetAdapter -Name * | fl (with MAC addresses removed):

        Name                       : vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)
        InterfaceDescription       : Hyper-V Virtual Ethernet Adapter #2
        InterfaceIndex             : 5
        MacAddress                 : 55-55-55-55-55-55
        MediaType                  : 802.3
        PhysicalMediaType          : Unspecified
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Mbps)            : 100
        MediaConnectionState       : Connected
        ConnectorPresent           : False
        DriverInformation          : Driver Date 2006-06-21 Version 6.3.9600.16384 NDIS 6.40

        Name                       : Ethernet 3
        InterfaceDescription       : Hyper-V Virtual Ethernet Adapter #3
        InterfaceIndex             : 6
        MacAddress                 : 55-55-55-55-55-56
        MediaType                  : 802.3
        PhysicalMediaType          : Unspecified
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Gbps)            : 10
        MediaConnectionState       : Connected
        ConnectorPresent           : False
        DriverInformation          : Driver Date 2006-06-21 Version 6.3.9600.16384 NDIS 6.40

        Name                       : Ethernet
        InterfaceDescription       : Realtek PCIe GBE Family Controller
        InterfaceIndex             : 2
        MacAddress                 : 55-55-55-55-55-57
        MediaType                  : 802.3
        PhysicalMediaType          : 802.3
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Mbps)            : 100
        MediaConnectionState       : Connected
        ConnectorPresent           : True
        DriverInformation          : Driver Date 2013-05-10 Version 8.1.510.2013 NDIS 6.30
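
    One detail that often trips people up here: once the virtual switch is shared with the management OS, the static settings belong on the vEthernet adapter, not on the physical Realtek NIC (which becomes a bare uplink). The classic netsh syntax still works on 8.1; a sketch with example addresses:

        netsh interface ip set address "vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)" static 192.168.1.10 255.255.255.0 192.168.1.1
        netsh interface ip set dns "vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)" static 8.8.8.8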

  • Why won't my service start, and why doesn't upstart output any errors?

    - by Alex Waters
    I am trying to start gunicorn as a service via upstart (start gunicorn) as user ale. I'm using gunicorn/Flask on Ubuntu 12.04 with init (upstart 1.5). Here is my /etc/init/gunicorn.conf:

        setuid btw
        setgid flask
        script
            export HOME=/home/btw
            export WORKON_HOME=$HOME/.virtualenvs
            . $HOME/.virtualenvs/default/bin/activate
            cd $HOME/flask
            workon default
            gunicorn -c gunicorn.py bw:app
        end script

    It doesn't output anything other than gunicorn start/running, process 12992. If I then do status gunicorn I get stop/waiting. Any ideas on how to debug this? I tried following http://upstart.ubuntu.com/wiki/Debugging but it didn't help. If I do the following as user ale in the app's directory, gunicorn runs fine:

        workon default
        gunicorn -c gunicorn.py bw:app

    Here is ~/flask/gunicorn.py:

        bind = "0.0.0.0:8080"
        workers = 3
        backlog = 2048
        worker_class = "gevent"
        debug = True
        daemon = False
        pidfile = "/tmp/gunicorn.pid"
        log_level = "debug"
        accesslog = "/var/log/gunicorn/access.log"
        errorlog = "/var/log/gunicorn/error.log"
        user = "btw"
        group = "flask"

    Also, /var/log/error.log doesn't show anything new when I try to start the gunicorn service. If I start it manually, it shows that the workers have been loaded, etc. Thanks for any help / suggestions!
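
    On upstart 1.5 the quickest way to see why the job dies is to capture the job's own stdout/stderr, which upstart discards unless told otherwise. Adding a console log stanza sends it to /var/log/upstart/gunicorn.log; a sketch:

        # add to /etc/init/gunicorn.conf
        console log

        # then, from a shell:
        sudo start gunicorn
        sudo tail -n 50 /var/log/upstart/gunicorn.log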

  • postfix smtpd rejecting mail from outside network (match_list_match: no match)

    - by Loopo
    My Postfix (version 2.5.5-1.1) running on Ubuntu Server 9.04 started to reject mail arriving from outside about 2 weeks ago. A "manual" session via telnet shows that the connection is always closed after the MAIL FROM: [email protected] line is input, with the message "Connection closed by foreign host." Doing the same from another client inside the LAN works fine. In the log files I get the line:

        lost connection after MAIL from xxxxx.tld[xxx.xxx.xxx.xxx]

    This comes after some lines like:

        match_hostaddr: XXX.XXX.XXX.XXX ~? [::1]/128
        match_hostname: XXXX.tld ~? 192.168.1.0/24
        ...
        match_list_match: xxx.xxx.xxx.xxx: no match

    which seem to suggest some kind of filter that checks for allowed addresses. I have been unable to locate where this filter lives, or how to turn it off. I'm not even sure that's what's causing my problem; connections from inside the LAN don't get disconnected, even though they also show a "match_list_match: ... no match" line. I didn't change any configuration files recently. Below is my main.cf as it currently stands. I don't really know what all the parameters do and how they interact; I just set it up initially and it worked fine up to recently.

        smtpd_banner = $myhostname ESMTP $mail_name (GNU)
        biff = no
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/server.crt
        smtpd_tls_key_file=/etc/ssl/private/server.key
        #smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_sasl_auth_enable = no
        smtp_use_tls=no
        smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth
        myhostname = XXXXXXX.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = XXXX.XXXX.com, XXXX.com, localhost.XXXXX.com, localhost
        relayhost = XXX.XXX.XXX.XXX
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_sasl_local_domain =
        #smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_authenticated_header = yes
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_

    When checking the process list, postfix/smtpd runs as:

        smtpd -n smtp -t inet -u -c -o stress -v -v

    Any clues?
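
    For what it's worth, the match_list_match lines are probably a red herring: that is just Postfix evaluating the client against mynetworks at the -v -v verbosity visible in the process list, and an outside client is expected not to match. Two low-risk diagnostic steps, sketched below, are to dump only the non-default settings and to watch the mail log while the outside host connects:

        # show only settings that differ from Postfix defaults
        postconf -n
        # follow the SMTP conversation for the failing client
        tail -f /var/log/mail.log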

  • CentOS running Apache Tomcat keeps getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with java version "1.7.0_21". We were getting a lot of "too many open files" errors, so I did some research. The consensus was that it was to do with the number of open files, so I did the following:

    1. Increased max files in /etc/security/limits.conf:

        soft nofile 100000
        hard nofile 100000

    2. Rebooted the server.
    3. Checked the limits were valid for the user which was to run the process:

        [app_admin@xxx ~]$ ulimit -Hn
        100000
        [app_admin@xxx ~]$ ulimit -Sn
        100000

    4. Monitored open files on the server using the lsof command.

    What I observed was that when the total open files reached circa 13,000 and Tomcat had around 4,500 open files, the error reappeared. I am confused. I thought this would have resolved the problem, but clearly I don't fully understand the root cause or how to set the parameter correctly. To (maybe) help: I have not modified the server.xml file for Tomcat (although I'm tempted); I don't want to start fiddling with that and make things worse. I'm more than happy to share any more information if someone can give me some hints on where to start looking.
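
    One check worth making is whether the limit the running JVM actually inherited matches what ulimit reports for a login shell; limits.conf only applies to PAM sessions, so a daemon started at boot can end up with the old value. /proc answers this directly; a sketch, assuming the Tomcat JVM is findable via pgrep:

        # limits of the live Tomcat process, not of your shell
        cat /proc/$(pgrep -f org.apache.catalina.startup.Bootstrap)/limits | grep 'open files'
        # count what that process really has open
        ls /proc/$(pgrep -f org.apache.catalina.startup.Bootstrap)/fd | wc -l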

  • How to find the cause of locked user account in Windows AD domain

    - by Stephane
    After a recent incident with Outlook, I was wondering how I would most efficiently resolve the following problem. Assume a fairly typical small-to-medium-sized AD infrastructure: several DCs; a number of internal servers and Windows clients; several services using AD and LDAP for user authentication from within the DMZ (SMTP relay, VPN, Citrix, etc.); and several internal services all relying on AD for authentication (Exchange, SQL Server, file and print servers, terminal services servers). You have full access to all systems, but they are a bit too numerous (counting the clients) to check individually. Now assume that, for some unknown reason, one (or more) user account gets locked out due to the password lockout policy every few minutes. What would be the best way to find the service/machine responsible for this? Assuming the infrastructure is pure, standard Windows with no additional management tools and few changes from defaults, is there any way the process of finding the cause of such a lockout could be accelerated or improved? What could be done to improve the resilience of the system against such an account-lockout DoS? Disabling account lockout is an obvious answer, but then you run into the issue of users having far too easily exploitable passwords, even with complexity enforced.
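
    For the "which machine keeps locking it" part, the lockout event on the domain controllers (4740 on Server 2008+, 644 on 2003) records the calling computer name, so querying just the DCs (the PDC emulator in particular, since lockouts are processed there) narrows it down quickly without extra tooling. A sketch using the built-in wevtutil:

        rem run on/against each DC; 4740 = "A user account was locked out"
        wevtutil qe Security /q:"*[System[(EventID=4740)]]" /f:text /c:20 /rd:true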

  • Can I tell if CrashPlan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, whether CrashPlan has backed up a particular file, including the current updates to that file; i.e., that the current contents of the file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the file's last modification time, but that method appears to be somewhat dubious. A couple of methods I could think of, but don't know how to implement:

    - Compare the current file to CrashPlan's metadata about that file. This needs knowledge of the format of CrashPlan's "cache" files as well as the hashing system used. It might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something scriptable.
    - Restore the file to a temporary directory and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version" and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall's in addition to the above.)
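
    As a stop-gap while the CrashPlan side stays opaque, the archive_command itself can record a checksum for every WAL it hands over, so later spot-checks against a GUI restore are mechanical rather than eyeballed. A sketch (the paths and script name are examples; PostgreSQL substitutes %p and %f):

        #!/bin/bash
        # wal-archive.sh <full-path> <file-name>; in postgresql.conf:
        #   archive_command = '/usr/local/bin/wal-archive.sh %p %f'
        set -e
        cp "$1" /var/backups/wal/"$2"
        sha256sum /var/backups/wal/"$2" >> /var/backups/wal/manifest.sha256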

  • MacBook Pro - 15" with i7 processor - any problems with heat?

    - by webworm
    You may have already heard about the review done by the folks at PC Authority in Australia, where an i7 MacBook Pro got up to 100 degrees Celsius during benchmarking. Here is the URL in case you have not read it: http://www.pcauthority.com.au/News/172791,macbook-pro-helps-core-i7-hit-100-degrees.aspx In any case, I was considering purchasing a 15" MacBook Pro with the i7 processor and the NVIDIA GeForce GT 330M with 512 MB of video memory. Having read how hot the computer got, I started to become hesitant about purchasing it. My main concern is long-term damage to the computer due to excessive heat. I plan to use the MacBook Pro as a development machine, running Windows 7 within VMware Fusion or VirtualBox; within the VM I will be running IIS, SQL Server, Visual Studio and SharePoint Server, hence my wish for the power of the i7 processor. That is why I wanted to check with actual owners of MacBooks with the i7 processor and ask about their experiences. Have you noticed excessive heat? How does your MacBook handle processor-intensive apps over long periods of time? Thank you!

  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another drive, but by mistake I formatted the wrong one, and then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was: a primary NTFS partition with Windows, and an extended partition holding four logical partitions: a swap and three XFS partitions (two for Ubuntu and openSUSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal:

        ubuntu@ubuntu:~$ sudo gpart /dev/sdb
        Begin scan...
        Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb)
        Possible extended partition at offset(39997mb)
        Possible partition(Linux swap), size(8189mb), offset(39997mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb)
        Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb)
        Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb)
        End scan.
        Checking partitions...
        Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary
        Partition(Linux swap or Solaris/x86): logical
        Partition(Linux ext2 filesystem): logical
        Partition(Linux ext2 filesystem): orphaned logical
        Partition(Linux ext2 filesystem): orphaned logical
        Ok.
        Guessed primary partition table:
        Primary partition(1)
           type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX)
           size: 39997mb #s(81915360) s(63-81915422)
           chs:  (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r
        Primary partition(2)
           type: 015(0x0F)(Extended DOS, LBA)
           size: 265245mb #s(543221849) s(81915435-625137283)
           chs:  (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r
        Primary partition(3)
           type: 000(0x00)(unused)
           size: 0mb #s(0) s(0-0)
           chs:  (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r
        Primary partition(4)
           type: 000(0x00)(unused)
           size: 0mb #s(0) s(0-0)
           chs:  (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r

    Looking at the first eight lines, it seems the data are still there, but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
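
    Given a spare disk with enough space, a prudent sequence is to image the damaged disk first, then let a partition-recovery tool search for and rewrite the lost table, and only then check the XFS filesystems read-only. A sketch (device names and mount points are examples; double-check them before running anything):

        # 1. take a raw image of the damaged 320 GB disk onto the spare drive
        sudo dd if=/dev/sdb of=/mnt/spare/sdb.img bs=1M conv=noerror,sync
        # 2. deep-search and restore the partition table (interactive)
        sudo testdisk /dev/sdb
        # 3. once the partitions reappear, verify each XFS volume without writing
        sudo xfs_repair -n /dev/sdb5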

  • Can Xen be configured to dedicate only one port of a dual-port NIC to a domU?

    - by jamieb
    I'm using CentOS 5.4 on my dom0 with a stock Xen kernel. I'm attempting to use the pciback module to hide some of the Ethernet ports from the host and reserve them for a domU I intend to use for a firewall (process described here). However, when I launch the domU, I get the following error message:

        Using config file "/etc/xen/firewall".
        Error: pci: improper device assignment specified: pci: 0000:01:04.0 must be co-assigned to the same guest with 0000:01:06.0, but it is not owned by pciback.

    lspci gives me the following output:

        00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02)
        00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02)
        00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
        01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    From the sound of the error message, it seems like I also need to dedicate eth0 (PCI ID 01:04.0) to the domU. Am I correct? If not, what am I doing wrong? Thanks!
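
    The error is Xen telling you that 01:04.0 and 01:06.0 fall into the same co-assignment group (they share the same legacy PCI bus segment), so both must be owned by pciback even if the guest only uses one of them. A runtime sketch for handing the second device over; the 8139too driver name is an assumption, so check what /sys/bus/pci/devices/0000:01:04.0/driver actually points to first:

        # detach 0000:01:04.0 from its host driver and give it to pciback
        echo 0000:01:04.0 | sudo tee /sys/bus/pci/drivers/8139too/unbind
        echo 0000:01:04.0 | sudo tee /sys/bus/pci/drivers/pciback/new_slot
        echo 0000:01:04.0 | sudo tee /sys/bus/pci/drivers/pciback/bind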

  • installing Conkeror on Ubuntu 12.04

    - by Menelaos Perdikeas
    I am reading the instructions on the Conkeror site (and elsewhere) on how to install Conkeror on Ubuntu (I am using Ubuntu 12.04 LTS), and it seems that the correct sequence is:

        sudo apt-add-repository ppa:xtaran/conkeror
        sudo apt-get update
        sudo apt-get install conkeror conkeror-spawn-process-helper

    The first step (apt-add-repository) seems to execute without a problem, giving the following output:

        You are about to add the following PPA to your system:
        Conkeror Debian packages for Ubuntu releases without xulrunner (i.e. for 11.04 Natty and later)
        More info: https://launchpad.net/~xtaran/+archive/conkeror
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.Re7pWaDEQF --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv CB29CBE050EB1F371BAB6FE83BE0F86A6D689050
        gpg: requesting key 6D689050 from hkp server keyserver.ubuntu.com
        gpg: key 6D689050: "Launchpad PPA for Axel Beckert" not changed
        gpg: Total number processed: 1
        gpg:              unchanged: 1

    However, apt-get update seems unable to fetch packages from the newly added PPA, with its output ending in:

        Hit http://security.ubuntu.com precise-security/restricted Translation-en
        Hit http://security.ubuntu.com precise-security/universe Translation-en
        Err http://ppa.launchpad.net precise/main Sources
          404 Not Found
        Ign http://extras.ubuntu.com precise/main Translation-en_US
        Err http://ppa.launchpad.net precise/main i386 Packages
          404 Not Found
        Ign http://extras.ubuntu.com precise/main Translation-en
        Ign http://ppa.launchpad.net precise/main Translation-en_US
        Ign http://ppa.launchpad.net precise/main Translation-en
        W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/source/Sources  404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Accordingly, apt-get install conkeror fails with:

        mperdikeas@mperdikeas:~$ sudo apt-get install conkeror
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package conkeror

    Any ideas what might be wrong?
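
    The 404s indicate the PPA simply publishes no package index for precise, whatever its description promises, so apt has nothing to fetch. One common (and somewhat fragile) workaround is to retarget the PPA's source list at a series it does publish; the file name below follows the pattern apt-add-repository uses on 12.04 and the target series is an assumption, so check the PPA's Launchpad page first:

        # point the PPA at an older series it actually publishes (natty assumed)
        sudo sed -i 's/precise/natty/' /etc/apt/sources.list.d/xtaran-conkeror-precise.list
        sudo apt-get update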
