Search Results

Search found 33139 results on 1326 pages for 'embedded database'.


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My understanding is that separate data files should be spread across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't really have the same disk-contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file independently. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
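
    A minimal T-SQL sketch of the kind of layout being discussed - an extra filegroup holding several data files, with one busy index rebuilt onto it. The database, file paths, and object names are purely illustrative, not taken from the question:

      -- add a filegroup with multiple data files for write-heavy objects
      ALTER DATABASE MyDb ADD FILEGROUP HotData;
      ALTER DATABASE MyDb ADD FILE
          (NAME = HotData1, FILENAME = 'E:\Data\MyDb_Hot1.ndf', SIZE = 10GB),
          (NAME = HotData2, FILENAME = 'F:\Data\MyDb_Hot2.ndf', SIZE = 10GB)
      TO FILEGROUP HotData;

      -- rebuild an existing clustered index onto the new filegroup
      -- (index and table names are hypothetical)
      CREATE CLUSTERED INDEX IX_Orders ON dbo.Orders (OrderID)
          WITH (DROP_EXISTING = ON) ON HotData;

    Objects placed on the filegroup are then striped across its files, which is the mechanism the parallelism question is really about.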


  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine acting as Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files on the server share.

    The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOWLY, with a maximum transfer rate of about 2,500 Kbit per second. If I disable LDAP, the client app speeds up roughly 20x, transferring at around 50 Mbit/sec and running smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.

    The suspects: LDAP and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    Slow smb.conf WITH LDAP:

      [global]
      workgroup = zmartsoft.local
      passdb backend = ldapsam:ldap://127.0.0.1
      printing = cups
      printcap name = cups
      printcap cache time = 750
      cups options = raw
      map to guest = Bad User
      logon path = \\%L\profiles\.msprofile
      logon home = \\%L\%U\.9xprofile
      logon drive = P:
      usershare allow guests = Yes
      add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
      domain logons = Yes
      domain master = Yes
      local master = Yes
      netbios name = server
      os level = 65
      preferred master = Yes
      security = user
      wins support = Yes
      idmap backend = ldap:ldap://127.0.0.1
      ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
      ldap group suffix = ou=Groups
      ldap idmap suffix = ou=Idmap
      ldap machine suffix = ou=Machines
      ldap passwd sync = Yes
      ldap ssl = Off
      ldap suffix = dc=zmartsoft,dc=local
      ldap user suffix = ou=Users
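
    One setting often suggested for this symptom - an assumption here, not something from the question - is telling Samba to trust that the LDAP passdb entries are complete, so it skips an extra system (NSS) lookup for every access check:

      [global]
          # assume the Samba LDAP entries carry all needed user attributes,
          # avoiding repeated getpwnam/NSS round trips on each session setup
          ldapsam:trusted = yes

    Running nscd (or another NSS cache) on the Samba host is the other usual step when every file operation appears to trigger LDAP traffic.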


  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites.

    My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support that sort of pressure. You can do one of two things:

      1. Invest in a single large server that has huge amounts of processing power, memory and storage (such as Microsoft's TerraServer).
      2. Spread the load of your website across a number of machines.

    Let me tackle the second approach: you have a collection of machines, all running web server software and all having access to identical copies of the website's pages. You can spread the load across these machines either using a cyclic (round-robin) pattern in DNS or using a load-balancing switch. The advantages of this approach are:

      - Redundancy: servers can fail and the others "pick up the slack".
      - Incremental growth: the ability to easily add new machines to the set-up.

    My questions (a sketch of the session point follows below):

      1. Is there a virtual approach to this issue of load balancing now?
      2. If the website runs from a database, is there still only a single copy of the database?
      3. If a user had a session running on one server (e.g. they had gone to www.example.org and been assigned to Server 2, where they had created a session), would they still have their session if they refreshed the website and were allocated Server 3?
      4. What are the other disadvantages associated with load balancing?

    Many thanks, J
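
    As a hedged illustration of the session question (not part of the original post): most software balancers can pin a client to the server that created its session. A minimal Apache mod_proxy_balancer fragment using cookie-based stickiness, with made-up backend addresses and route labels:

      <Proxy balancer://webfarm>
          # route=... must match the suffix the backend appends to its session cookie
          BalancerMember http://10.0.0.2:8080 route=server2
          BalancerMember http://10.0.0.3:8080 route=server3
          ProxySet stickysession=JSESSIONID|jsessionid
      </Proxy>
      ProxyPass        / balancer://webfarm/
      ProxyPassReverse / balancer://webfarm/

    The database itself normally stays a single authoritative copy (or a replicated cluster) separate from the web tier, rather than being duplicated per web server.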


  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 into runlevel 5. After all services start, when the login screen should appear, gdm just shows a waiting mouse cursor and keeps restarting itself. From /var/log/gdm/:0-greeter.log:

      Gtk-Message: Failed to load module "pk-gtk-module"
      /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
      /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: here is a better description of the error:

      (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
      /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems the gdm user was not added, but it is present in the users and groups list.

      May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
      May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
      May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
      May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
      May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
      May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
      May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
      May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
      May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
      May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
      May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
      May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
      May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
      May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
      May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
      May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
      May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
      May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?
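
    A hedged way to check whether the preupgrade left mismatched accessibility/ATK packages behind - these are generic Fedora commands, and the package names are assumptions about where libatk-bridge.so comes from, not taken from the post:

      # see which atk / at-spi related packages are installed and from which release
      rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | grep -Ei 'atk|at-spi|gail' | sort

      # pull every installed package to the version in the Fedora 15 repositories
      yum distro-sync

      # reinstall the accessibility bridge bits if they are still at the F14 version
      yum reinstall at-spi2-atk atk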


  • how to initialize two logical drives on an HP P400i controller without a reboot

    - by John
    What I am trying to do is initialize two logical drives on an HP P400i embedded controller without rebooting the system. Here is my current array config:

      array A (SAS, Unused Space: 0 MB)
        logicaldrive 1 (17.9 GB, RAID 5, OK)
        logicaldrive 2 (17.9 GB, RAID 5, OK)
        logicaldrive 3 (75.9 GB, RAID 5, OK)
        logicaldrive 4 (25.0 GB, RAID 5, OK)
        physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
        physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
        physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 72 GB, OK)

      array B (SAS, Unused Space: 0 MB)
        logicaldrive 5 (99 MB, RAID 0, OK)
        logicaldrive 6 (68.2 GB, RAID 0, OK)
        physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 72 GB, OK)

    This is a Windows 2003 machine running the HpCISs2.sys driver, version 6.20.0.32. I have the ACU and ACU CLI tools installed (version 8.28.13.0) and P400i firmware version 2.74.

    What I'd like to do is remove physical drive 1I:1:4, delete the two logical drives in array B, then insert a new drive into bay 4 that already contains two logical drives, and have them show up in array B again.

    So far, after I remove the drive and delete the failed logical drives, I insert the new drive and run an hpacucli rescan. I get the new drive to show up as an unassigned physical drive, but I can't figure out how to - for lack of a better word - mount the two logical drives on the new unassigned disk. If I reboot the system, the array controller picks up the new fourth drive and creates array B with the logical drives without a problem, but I'd really like not to have to reboot the server. Any ideas?
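
    For reference, the kind of hpacucli session being described - the slot number is illustrative, and these commands only inspect and rescan state, they do not by themselves re-import the foreign logical drives:

      # show arrays and logical drives as the controller currently sees them
      hpacucli ctrl all show config

      # after swapping the disk, ask the controller to re-scan for hardware changes
      hpacucli rescan

      # confirm whether the new disk is reported as unassigned
      hpacucli ctrl slot=0 pd all show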


  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats. These Tomcats then connect to various databases, and we balance the load to the Tomcats with mod_proxy_balancer.

    Currently we are receiving 100 requests a second. The load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10.

    I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat rises above some threshold, display an error page. That way the users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly - instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used.

    Hopefully this makes sense; I was looking for suggestions on how I could achieve this. Thanks.
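
    Stock mod_proxy_balancer cannot key on average response time, but a hedged approximation is to cap how long Apache waits per backend request and how many connections each worker may hold, then serve a friendly 503 when a Tomcat is slow or saturated. Addresses, limits, and the error page are illustrative values, and max= is counted per Apache child process under prefork:

      <Proxy balancer://tomcats>
          # per-worker limits: fail a request rather than queue behind a slow Tomcat
          BalancerMember http://10.0.0.11:8080 timeout=5 connectiontimeout=2 retry=30 max=50
          BalancerMember http://10.0.0.12:8080 timeout=5 connectiontimeout=2 retry=30 max=50
      </Proxy>
      ProxyPass /app balancer://tomcats/

      # what the unlucky users see instead of a hung browser
      ErrorDocument 503 /sorry.html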


  • apache2 mysql authentication module and SHA1 encryption

    - by Luca Rossi
    I found myself with a setup where I need to enable an authentication method using MySQL. I already have a user scheme, and it works like a charm with MD5 and CRYPT passwords, but when I switch to SHA1 it says:

      [Fri Oct 26 00:03:20 2012] [error] Unsupported encryption type: Sha1sum

    There is no useful debug information in the log files. This is my setup and some info: Debian 6 with Apache and SSL. Installed packages:

      root@sistemichiocciola:/etc/apache2/mods-available# dpkg --list | grep apache
      ii  apache2                     2.2.16-6+squeeze8    Apache HTTP Server metapackage
      ii  apache2-mpm-prefork         2.2.16-6+squeeze8    Apache HTTP Server - traditional non-threaded model
      ii  apache2-utils               2.2.16-6+squeeze8    utility programs for webservers
      ii  apache2.2-bin               2.2.16-6+squeeze8    Apache HTTP Server common binary files
      ii  apache2.2-common            2.2.16-6+squeeze8    Apache HTTP Server common files
      ii  libapache2-mod-auth-mysql   4.3.9-13+b1          Apache 2 module for MySQL authentication
      ii  libapache2-mod-php5         5.3.3-7+squeeze14    server-side, HTML-embedded scripting language (Apache 2 module)

      root@sistemichiocciola:/etc/apache2/sites-enabled# dpkg --list | grep ssl
      ii  libssl-dev          0.9.8o-4squeeze13   SSL development libraries, header files and documentation
      ii  libssl0.9.8         0.9.8o-4squeeze13   SSL shared libraries
      ii  openssl             0.9.8o-4squeeze13   Secure Socket Layer (SSL) binary and related cryptographic tools
      ii  openssl-blacklist   0.5-2               list of blacklisted OpenSSL RSA keys
      ii  ssl-cert            1.0.28              simple debconf wrapper for OpenSSL

    My vhost setup:

      AuthMySQL On
      Auth_MySQL_Host localhost
      Auth_MySQL_User XXX
      Auth_MySQL_Password YYY
      Auth_MySQL_DB users
      AuthName "Sistemi Chiocciola Sezione Informatica"
      AuthType Basic
      # require valid-user
      require group informatica
      Auth_MySQL_Encryption_Types Crypt Sha1sum
      AuthBasicAuthoritative Off
      AuthUserFile /dev/null
      Auth_MySQL_Password_Table users
      Auth_MYSQL_username_field email
      Auth_MYSQL_password_field password
      AuthMySQL_Empty_Passwords Off
      AuthMySQL_Group_Table http_groups
      Auth_MySQL_Group_Field user_group

    Have I missed a package/configuration or something?


  • Repository bugzilla package changed to bugzilla3 in Lenny; upgradable?

    - by Pukku
    This question was asked on debianhelp.org almost half a year ago but never got an answer. I wasn't the one who posted it, but today I am facing exactly the same question. Not sure if copying it here as such is considered inappropriate, but there isn't really anything I would even like to paraphrase, so here it is. (I'm sure you will be happy to close it if this is not the way to go.)

    Hello all! We are using a Bugzilla server installed on a Debian 4/Etch server and are starting to look at the upgrade to Debian 5/Lenny. I was hoping to upgrade the existing Bugzilla server and database from the oldstable version (v2.22) to the newer stable in Lenny (v3) when we get to doing a dist-upgrade. However, from testing in a virtual machine it seems that the old package was called "bugzilla" whereas the Lenny package is called "bugzilla3", and I could not figure out a way to upgrade directly between the two.

    Is it possible to establish some kind of upgrade path quickly after the dist-upgrade to minimise downtime, using apt-get or aptitude? Going on past experience I would not want to do a fresh install with the bugzilla3 package and attempt to inject the old database into it (previous attempts failed miserably!) :(


  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine whether I can use a single firewall for my entire network, including customer servers, or whether each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall: if you need a web node and a database node, you also have to get a firewall and pay another monthly fee for it.

    I have colo space with several KVM virtualization servers hosting VPS services for many different customers. Each KVM host runs a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open - allowing a web VPS to be accessed from anywhere on ports 80 and 443, for example, while blocking a database VPS completely from the outside and only allowing a certain other VPS to access it. This configuration works well for my current needs. Note that there is no hardware firewall protecting the virtualization hosts in place at this time; however, the KVM hosts only have port 22 open, run nothing except KVM and SSH, and even port 22 cannot be accessed except from inside the netblock.

    I'm now looking at possibly rethinking my network because I have a client who needs to move from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system itself. Should I require that each dedicated-server customer have their own dedicated firewall, or can I use a single network-wide firewall for multiple customer clusters?

    I'm familiar with iptables and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of rack space for each firewall, nor pay the power consumption each firewall server will take, so I'm also considering a hardware firewall. Any suggestions on what is a good approach?
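
    A minimal iptables sketch of the per-customer policy described above as it might look on one shared firewall/router - the addresses and the MySQL port are made up for illustration:

      # customer web node: reachable from anywhere on 80/443
      iptables -A FORWARD -d 192.0.2.10 -p tcp -m multiport --dports 80,443 -j ACCEPT

      # customer db node: only that customer's web node may reach the database
      iptables -A FORWARD -s 192.0.2.10 -d 192.0.2.11 -p tcp --dport 3306 -j ACCEPT

      # let reply traffic back through, drop everything else crossing the firewall
      iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -P FORWARD DROP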


  • TFS 2012: Backup Plan Fails with empty log file

    - by Vitor
    I have a Team Foundation Server 2012 installation with Power Tools, and I defined a backup plan using the wizard found under "Database Backup Tools" in the Team Foundation Server Administration Console. I set the backup plan to do a full database backup on Sunday mornings to another server on the network. I followed the wizard with no problems and the backup plan was created successfully. However, when the backup runs it returns Error as its result, and when I go to the log file I only get the header and no further info:

      [Info @01:00:01.078] ====================================================================
      [Info @01:00:01.078] Team Foundation Server Administration Log
      [Info @01:00:01.078] Version  : 11.0.50727.1
      [Info @01:00:01.078] DateTime : 11/25/2012 02:00:01
      [Info @01:00:01.078] Type     : Full Backup Activity
      [Info @01:00:01.078] User     : <backup user>
      [Info @01:00:01.078] Machine  : <TFS Server>
      [Info @01:00:01.078] System   : Microsoft Windows NT 6.2.9200.0 (AMD64)
      [Info @01:00:01.078] ====================================================================

    I can imagine it's a permission problem, but I have no idea where to start... Can anyone help? Thank you for your time!

    EDIT: I'm not sure if it is related, but when I logged in as the backup user on the TFS server there was a crash window open saying "TFS Power Tool Shell Extension (TfsComProviderSvr) has stopped working". The full crash report is here:

      Problem signature:
        Problem Event Name:       APPCRASH
        Application Name:         TfsComProviderSvr.exe
        Application Version:      11.0.50727.0
        Application Timestamp:    5050cd2a
        Fault Module Name:        StackHash_e8da
        Fault Module Version:     6.2.9200.16420
        Fault Module Timestamp:   505aaa82
        Exception Code:           c0000374
        Exception Offset:         PCH_72_FROM_ntdll+0x00040DA8
        OS Version:               6.2.9200.2.0.0.272.7
        Locale ID:                1043
        Additional Information 1: e8da
        Additional Information 2: e8dac447e1089515a72386afa6746972
        Additional Information 3: d903
        Additional Information 4: d9036f986c69f4492a70e4cf004fb44d

    Does it help? Thanks everyone!


  • PDU management interface has low availability - product flaw or isolated issue

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network-management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote-management perspective.

    Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of November 2011. They respond to pings reliably, and we have no reason to suspect a network-layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability from the embedded management host in all of the PDUs. We occasionally have to restart the microcontroller on a PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power-cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior.

    There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit - only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?


  • Apache 2 Fails to Start After Upgrade with No Errors

    - by Mark Davidson
    Hi all. Hoping someone can help me with a server issue. Recently we upgraded to the latest Apache on two boxes within our organisation, one being the master box and the other being for failover. The upgrade went fine on the master box, but on the failover box Apache fails to start with no errors being output or logged. Both boxes have the exact same configuration, so I found this a bit strange.

    I've reinstalled Apache and have been through checking the configs and did not find any obvious errors. Eventually I ran a syntax check on each config file being included and found that one of the files apparently has syntax errors:

      Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
      Invalid command 'php_value', perhaps misspelled or defined by a module not included in the server configuration
      Invalid command 'GeoIPEnable', perhaps misspelled or defined by a module not included in the server configuration

    I've triple-checked that all the modules are enabled, but it still fails. I've googled these errors a lot but have been unable to find a solution. I was wondering if anyone has encountered such a problem before and could point me towards one. Thanks for your help in advance.

    P.S. Apache-related versions on the server:

      ii  apache2                 2.2.3-4+etch10        Next generation, scalable, extendable web se
      ii  apache2-mpm-prefork     2.2.3-4+etch10        Traditional model for Apache HTTPD 2.1
      ii  apache2-utils           2.2.3-4+etch10        utility programs for webservers
      ii  apache2.2-common        2.2.3-4+etch10        Next generation, scalable, extendable web se
      ii  libapache2-mod-geoip    1.1.8-2               GeoIP support for apache2
      ii  libapache2-mod-php5     5.2.0+dfsg-8+etch15   server-side, HTML-embedded scripting languag
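
    Those three "Invalid command" messages usually just mean the modules providing the directives were not loaded in the context where the syntax check ran. A hedged set of Debian-style checks - the module names are inferred from the packages listed above, not confirmed by the post:

      # check the config the same way the init script does, with envvars loaded
      apache2ctl configtest

      # see which modules are actually enabled on this box
      ls /etc/apache2/mods-enabled/

      # (re-)enable the providers of the failing directives, then restart
      # Order -> authz_host, php_value -> php5, GeoIPEnable -> geoip
      a2enmod authz_host
      a2enmod php5
      a2enmod geoip
      /etc/init.d/apache2 restart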


  • httpd 2.2.15 + suPHP + suExec + php5 = permission and information ?

    - by Prix
    Hi, I am currently playing around with suExec, suPHP and PHP5 on Apache on Slackware 13.1. Everything is installed and working properly, but now I'd like to go further into the directory permissions and the suPHP settings and options available.

    Initially I was planning to leave suPHP disabled unless a VirtualHost explicitly enables it, but that does not seem to work. See this sample mod_php.conf, which is included from my httpd.conf:

      #
      # mod_php & mod_suPHP - PHP Hypertext Preprocessor module
      #
      # Load the PHP module:
      LoadModule php5_module lib/httpd/modules/libphp5.so

      # Load the suPHP module:
      LoadModule suphp_module lib/httpd/modules/mod_suphp.so

      <IfModule mod_php5.c>
          # Tell Apache to feed all *.php files through PHP.  If you'd like to
          # parse PHP embedded in files with different extensions, comment out
          # these lines and see the example below.
          <FilesMatch \.php$>
              SetHandler application/x-httpd-php
          </FilesMatch>
      </IfModule>

      <IfModule mod_suphp.c>
          # This option tells mod_suphp if a PHP-script requested on this server (or
          # VirtualHost) should be run with the PHP-interpreter or returned to the
          # browser "as it is".
          suPHP_Engine off
      </IfModule>

    With the above sample, suPHP and PHP do not work together; if I comment out the php5 parts, the suPHP module runs just fine.

    So my first question is: how could I make this setup work? Leave suPHP disabled and use PHP5 by default, and if a VirtualHost has suPHP enabled, disable PHP5 there and use suPHP instead. If any information is missing here please let me know and I will update with any additional details you may need. Thanks in advance.
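
    A hedged sketch of the per-vhost arrangement being asked about: keep suPHP_Engine off globally, then flip the handler and engine inside the one VirtualHost that should use suPHP. The domain, paths, and handler name are placeholders, and the exact directives available depend on how mod_suphp was built:

      <VirtualHost *:80>
          ServerName suphp-site.example
          DocumentRoot /srv/www/suphp-site

          # hand this vhost's .php files to suPHP instead of mod_php
          <FilesMatch \.php$>
              SetHandler application/x-httpd-suphp
          </FilesMatch>
          suPHP_Engine on
          suPHP_AddHandler application/x-httpd-suphp

          # make sure mod_php does not also try to execute scripts here
          php_admin_flag engine off
      </VirtualHost>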


  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services we are paying to access their websites, to ask if we could authenticate our own users, and some of them said yes and sent me specs on how to do so. (One of the sites called such a system a "portal"; I've never heard the term used in quite that way.)

    It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this and is readily extended, surely. In my searching, I've come across:

      - SimpleSAMLphp
      - JOSSO
      - RubyCAS-Server
      - Shibboleth
      - Pubcookie
      - OpenID

    [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Service is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.]

    Can anyone recommend any of these, or suggest ones to avoid? Internally, we authenticate using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SAML) ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point.

    Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).


  • Team Foundation Server 2008 - TF220056 Error during installation

    - by David
    I'm attempting to install Team Foundation Server 2008 on a Windows Server 2003 instance that runs under Hyper-V. The SQL Server database itself is held on the root partition of the Hyper-V server and has Reporting Services installed (so I've solved the TF220059 error already). After hitting "Next" after typing the name of the SQL Server, I get this error:

      ---------------------------
      Microsoft Visual Studio 2008 Team Foundation Server Setup
      ---------------------------
      TF220056: An unrecoverable error occurred while trying to check the status of the Team Foundation database.
      Installation cannot continue. Check the install log for more details.
      ---------------------------
      OK
      ---------------------------

    The error log's stack trace makes it look like a bug in the TFS installer itself:

      [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: System.IO.IOException: The directory name is invalid.
      [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
      [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError()
      [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at System.IO.Path.GetTempFileName()
      [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.get_Log()
      [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.Run()
      [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.CommandLine.RunCommand(String[] args)
      [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: The directory name is invalid.
      [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe check failed with error code: 100

    I'm running the installer as the domain Administrator, although the server is a Terminal Server in Application Mode. Might that be the cause of the problem?
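
    Reading the stack trace, the failure is in Path.GetTempFileName(), which simply expands the %TEMP%/%TMP% of the account running the installer. A hedged thing to check - not confirmed by the post - is whether those variables point at a directory that actually exists in the install session, since Terminal Services in Application Mode uses per-session temp folders that can go missing:

      rem show where temp files will be created for this session
      echo %TEMP%
      echo %TMP%

      rem point both at a real, writable directory before re-running the installer
      mkdir C:\Temp
      set TEMP=C:\Temp
      set TMP=C:\Temp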


  • Xen domU passwd file overwritten with console log output

    - by malfy
    I was setting up a Debian Xen domU and, after booting it fine, I added basic configuration to /etc/network/interfaces and ran /etc/init.d/networking restart. This failed, so I decided to reboot. After the reboot I also ran xm shutdown box. When dropped to a shell prompt it wouldn't let me log in. Upon further inspection, I now have garbage in some critical files in /etc:

      root@box:/# tail +1 mnt/etc/{passwd-,shadow}
      tail: cannot open `+1' for reading: No such file or directory

      ==> mnt/etc/passwd- <==
      0000000000100000 (reserved)
      Nov 23 02:02:39 box kernel: [ 0.000000] Xen: 0000000000100000 - 0000000004000000 (usable)
      Nov 23 02:02:39 box kernel: [ 0.000000] DMI not present or invalid.
      Nov 23 02:02:39 box kernel: [ 0.000000] last_pfn = 0x4000 max_arch_pfn = 0x1000000
      Nov 23 02:02:39 box kernel: [ 0.000000] initial memory mapped : 0 - 033ff000
      Nov 23 02:02:39 box kernel: [ 0.000000] init_memory_mapping: 0000000000000000-0000000004000000
      Nov 23 02:02:39 box kernel: [ 0.000000] NX (Execute Disable) protection: active
      Nov 23 02:02:39 box kernel: [ 0.000000] 0000000000 - 0004000000 page 4k
      Nov 23 02:02:39 box kernel: [ 0.000000] kernel direct mapping tables up to 4000000 @ 7000-2c000
      Nov 23 02:02:3

      ==> mnt/etc/shadow <==
      32 nr_cpumask_bits:32 nr_cpu_ids:1 nr_node_ids:1
      Nov 23 02:02:39 box kernel: [ 0.000000] PERCPU: Embedded 15 pages/cpu @c15b0000 s37688 r0 d23752 u65536
      Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: s37688 r0 d23752 u65536 alloc=16*4096
      Nov 23 02:02:39 box kernel: [ 0.000000] pcpu-alloc: [0] 0
      Nov 23 02:02:39 box kernel: [ 0.000000] Xen: using vcpu_info placement
      Nov 23 02:02:39 box kernel: [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16160
      Nov 23 02:02:39 box kernel: [ 0.000000] Kernel command line: root=/dev/mapper/xen-guest_root ro quiet root=/dev/xvda1 ro
      Nov 23 02:02:39 box kernel: [ 0.000000] PID hash table entries:

    The garbage is also present in the passwd and group files (although I didn't paste that above, since I have since run debootstrap on the filesystem again). Does anyone have any insight into what happened and why?


  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:

      DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job.
      The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name
      since the last backup. All incremental backup jobs will fail until an express full backup runs.

    My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy-only backup. But I think this means that I can't use that backup together with the transaction logs to restore the database if the building (including the DPM server) burns down.

    I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup offsite, but that's obviously more expensive than the current setup. Many thanks in advance.
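
    For reference, a hedged T-SQL sketch of one commonly suggested arrangement (an assumption, not from the post): make the SQL Agent backups copy-only so DPM owns the backup chain, while the offsite copies remain restorable. The database name and file paths are placeholders:

      -- nightly job step: full backup that leaves the differential base alone
      BACKUP DATABASE [MyAppDb]
          TO DISK = N'D:\Backups\MyAppDb_full.bak'
          WITH COPY_ONLY, INIT;

      -- hourly job step: log backup that does NOT truncate the log or break
      -- the log chain that DPM is tracking
      BACKUP LOG [MyAppDb]
          TO DISK = N'D:\Backups\MyAppDb_log.trn'
          WITH COPY_ONLY, INIT;

    The discontinuity DPM complains about typically comes from two products both taking regular log backups of the same database, which is why the log backup is the one that matters most here.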


  • Mail to Mailenable group sometimes bounces back because it cannot find one of the group members

    - by Stanley
    I am using MailEnable Enterprise Edition 6.53. On one of my post offices I have a group address set up with 5 members. Under normal circumstances, when someone sends mail to this group, the mail gets successfully forwarded to all 5 members. However, every now and then a sender gets a bounce back from us saying the mail could not be delivered because one of the member addresses was not found on the server. This can happen for any one of the 5 users in the group. The error message looks like:

      Message could not be delivered. Error was: The email address is not available on this system
      The following recipient(s) could not be reached:
      [SF:mydomain.com/my_username]

    What could be causing this and how do I fix it? Obviously the mailbox DOES exist: under normal circumstances, when someone mails the group, the mail gets delivered successfully and forwarded to all group members. All the mailboxes most definitely DO exist on the system.

    EDIT: I have done more investigation and I see an error in the Windows Event Viewer:

      MailEnable Database Provider error: 0 FAILURE: (SQLDriverConnect), Module: MEIMAPS.exe;
      Error: [Microsoft][ODBC SQL Server Driver]Timeout expired

    This seems to be related to http://forum.mailenable.com/viewtopic.php?p=53157 and http://www.mailenable.com/kb/content/view.asp?ID=ME020535, but the setups in those links use MySQL as the database whereas we use SQL Server. Maybe this shines some new light on the problem?


  • Dynamic subdomain routing

    - by Nader
    Hi everyone. I asked this question over at Stack Overflow but got very few views: http://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain Perhaps it's more applicable to this crowd, so here it is again for convenience.

    I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com - hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs so that each points at a particular app server. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options:

      1. Set up DNS with a wildcard CNAME entry so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server. This seems clunky and slow to me.
      2. Run my own DNS server that can be programmatically managed, so that when a new site is created a DNS entry is added. We also move sites around to different app servers, so I would need to be able to update DNS entries in close to real time.

    Thoughts anyone? Thanks.

    Update: Based on some suggestions below, it seems like reverse-proxy server(s) are the way to go. As I'll be rebalancing the domain-to-server mapping, these changes need to take effect instantly, and the TTL on a DNS-based solution could be a problem. Any recommendations on software to use, considering this domain-to-IP data is stored in a DB and I'll need it to be performant?

    Update 2: I've set up external wildcard DNS pointing at an HAProxy web server whose job is to route requests to backend servers. The mapping is stored in our internal PowerDNS server. The question now is how to get the HAProxy server (or another) to use the value from the internal DNS rather than a config file or access list?
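
    A hedged HAProxy sketch of host-header routing. HAProxy will not query a SQL database or PowerDNS per request by itself; the usual pattern is to regenerate a map file from the authoritative store and reload gracefully. The snippet assumes HAProxy 1.5+ map support, and every name and address is illustrative:

      frontend www
          bind *:80
          # look up the Host header in a generated file of "subdomain backend" pairs
          use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/subdomains.map,be_default)]

      backend be_appserver1
          server app1 10.0.1.11:80 check

      backend be_appserver2
          server app2 10.0.1.12:80 check

      backend be_default
          server catchall 10.0.1.10:80 check

      # /etc/haproxy/subdomains.map, regenerated from the database and applied
      # with a graceful reload (haproxy -sf <oldpid>):
      #   abc.mydomain.com  be_appserver1
      #   def.mydomain.com  be_appserver2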


  • how to define service runlevel order position?

    - by DmitrySemenov
    I set up bind-dlz and need MySQL to start before named when the system boots. Here is what I have:

      [root@semenov]# ./test.sh
      mysql   0:off  1:off  2:on   3:on   4:on   5:on   6:off
      named   0:off  1:off  2:off  3:on   4:on   5:on   6:off
      lrwxrwxrwx. 1 root root 15 Apr 15 18:57 /etc/rc3.d/S93mysql -> ../init.d/mysql
      lrwxrwxrwx. 1 root root 15 Apr 15 18:57 /etc/rc3.d/S90named -> ../init.d/named

    Here is what I have in the mysql init script:

      # Comments to support chkconfig on RedHat Linux
      # chkconfig: 2345 84 16
      # description: A very fast and reliable SQL database engine.
      # Comments to support LSB init script conventions
      ### BEGIN INIT INFO
      # Provides: mysql
      # Required-Start: $local_fs $network $remote_fs
      # Should-Start: ypbind nscd ldap ntpd xntpd
      # Required-Stop: $local_fs $network $remote_fs
      # Default-Start: 2 3 4 5
      # Default-Stop: 0 1 6
      # Short-Description: start and stop MySQL
      # Description: MySQL is a very fast and reliable SQL database engine.
      ### END INIT INFO

    So when I remove named from chkconfig and have just mysql there, it starts with order number 84:

      /etc/rc3.d/S84mysql -> ../init.d/mysql

    But when I add named to chkconfig, mysql's order changes to 93:

      /etc/rc3.d/S93mysql -> ../init.d/mysql

    As a result, mysql starts after named, and named fails (no SQL available). Any ideas what I'm doing wrong? Here is what I have in the named init script:

      # chkconfig: 345 90 16
      # description: named (BIND) is a Domain Name Server (DNS) \
      #              that is used to resolve host names to IP addresses.
      # probe: true
      ### BEGIN INIT INFO
      # Provides: $named
      # Required-Start: $local_fs $network $syslog
      # Required-Stop: $local_fs $network $syslog
      # Default-Start: 2 3 4
      # Default-Stop: 0 1 2 3 4 5 6
      # Short-Description: start|stop|status|restart|try-restart|reload|force-reload DNS server
      # Description: control ISC BIND implementation of DNS server
      ### END INIT INFO

    Thanks, Dmitry
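
    One hedged way to express the dependency so chkconfig computes the order you want: declare in named's LSB header that it must start after mysql, then re-register the service so the rc?.d links are recomputed. Exact behaviour depends on the distro's chkconfig version; the header fragment below is a sketch, not the verbatim script:

      ### BEGIN INIT INFO   (in /etc/init.d/named)
      # Provides: $named
      # Required-Start: $local_fs $network $syslog mysql
      # Required-Stop:  $local_fs $network $syslog
      ### END INIT INFO

      # re-register the service so the S/K priorities pick up the new ordering
      chkconfig named off
      chkconfig named on
      # or, on chkconfig versions that support it:
      chkconfig named resetpriorities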


  • administrator user unable to login, suspicious user accounts "sky$", "admin$"

    - by mks
    I have Windows 2008 R2 Standard (64-bit) running in a virtual machine. Suddenly, since yesterday, I am not able to log in as administrator. Nobody changed the password. I am unable to log in both at the console and via Remote Desktop. Whenever I log in as Administrator I get this error: "The user name or password is incorrect." Nothing has changed on the machine, and I have logged in successfully in the past, both through the console and via Remote Desktop, several times on the same machine.

    One strange thing I noticed: I see some additional user accounts if I try to log in as another user. The suspicious accounts are:

      - sky$
      - admin$
      - SUPPORT_388945a0

    Were they created by some malware/virus, or are they hidden Windows accounts? The Microsoft site says this about SUPPORT_388945a0:

      The Support_388945a0 account enables Help and Support Service interoperability with signed scripts. This account is primarily used to control access to signed scripts that are accessible from within Help and Support Services. Administrators can use this account to delegate the ability for an ordinary user, who does not have administrative access over a computer, to run signed scripts from links embedded within Help and Support Services. These scripts can be programmed to use the Support_388945a0 account credentials instead of the user's credentials to perform specific administrative operations on the local computer that otherwise would not be supported by the ordinary user's account. When the delegated user clicks on a link in Help and Support Services, the script executes under the security context of the Support_388945a0 account. This account has limited access to the computer and is disabled by default.

    However, I am not sure where "admin$" and "sky$" came from. Has anyone had a similar experience?


  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported... half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) )

    I managed to get Windows 7 installed on a modified Wyse 9450 thin client, and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256 MB of RAM (512 MB total), a 60 GB laptop hard drive and a PCI video card to the 9450 (the video card was needed in order to increase the supported screen resolution). I basically did this to see whether or not it was possible to get 7 installed on such minimal hardware, and what the performance would be. For a 550 MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on, as I'm particularly curious to see whether the streaming video performance will improve.

    I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any sort of tweaks you've discovered to increase performance.


  • Error creating ODBC connection to SQL Server 2008 Express

    - by DavidB
    When creating a System DSN, I get this error:

      Connection failed:
      SQLState: '08001'
      SQL Server Error: 2
      [Microsoft][SQL Server Native Client 10.0]Named Pipes Provider: Could not open a connection to SQL Server [2].
      Connection failed:
      SQLState: 'HYT00'
      SQL Server Error: 0
      [Microsoft][SQL Server Native Client 10.0]Login timeout expired

    I'm running Vista Home Premium 64-bit SP2 and installed SQL Server 2008 Express Advanced without errors. I'll be using the database locally for an app installed on the same PC. I'm able to connect successfully with SQL Server Management Studio using Windows Authentication (my Windows account is a member of local Administrators), and I can successfully create a database with default ownership (it defaults to my Windows account).

    SQL Server Configuration Manager shows that Shared Memory, TCP/IP and Named Pipes are enabled for SQL Native Client 10.0 Configuration, SQL Native Client 10.0 Configuration (32-bit), and SQL Server Network Configuration (SQLEXPRESS). The SQL Server (SQLEXPRESS) and SQL Server Reporting Services (SQLEXPRESS) services are running.

    When I create a System DSN, my driver choices are SQL Server (sqlsrv32.dll, 4-10-09), which gives a generic wizard, and SQL Server Native Client 10.0 (sqlncli10.dll, 7-10-08), which gives the SQL Server 2008 wizard. I choose the latter. I enter a name and description, and have tried both MyPCName and 127.0.0.1 for the server name (browsing turns up nothing). After clicking Next, I leave it at Integrated Windows authentication and leave "Connect to server for additional options" checked. After clicking Next, I get the error above.

    I know it's probably a simple answer (permission issue?) and I'm a SQL noob, so I appreciate anything that would point me in the right direction. Thanks!
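
    One detail worth noting (an observation about SQL Express defaults, not something stated in the post): SQL Server 2008 Express installs as a named instance, so the DSN usually has to target MachineName\SQLEXPRESS rather than the bare machine name. A quick way to test both forms from a command prompt:

      rem works if the instance is reachable under the named-instance form
      sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"

      rem the bare machine name (what the DSN wizard was given) only works if
      rem SQL Browser is running or the instance listens on the default port
      sqlcmd -S MyPCName -E -Q "SELECT @@SERVERNAME"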


  • Setting kernel memory for installing postgresql

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing PostgreSQL on Mac OS X 10.6.8. The PostgreSQL README says:

      Shared Memory

      PostgreSQL uses shared memory extensively for caching and inter-process communication. Unfortunately, the default configuration of Mac OS X does not allow suitable amounts of shared memory to be created to run the database server.

      Before running the installation, please ensure that your system is configured to allow the use of larger amounts of shared memory. Note that this does not 'reserve' any memory, so it is safe to configure much higher values than you might initially need. You can do this by editing the file /etc/sysctl.conf - e.g.

        % sudo vi /etc/sysctl.conf

      On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains:

        kern.sysv.shmmax=1610612736
        kern.sysv.shmall=393216
        kern.sysv.shmmin=1
        kern.sysv.shmmni=32
        kern.sysv.shmseg=8
        kern.maxprocperuid=512
        kern.maxproc=2048

      Note that (kern.sysv.shmall * 4096) should be greater than or equal to kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096.

      Once you have edited (or created) the file, reboot before continuing with the installation. If you wish to check the settings currently being used by the kernel, you can use the sysctl utility:

        % sysctl -a

      The database server can now be installed.

    I'm a real beginner with all this, but I need to install PostgreSQL for academic purposes. Do you know how I can set this kernel shared memory? Won't that be harmful to my system? Thank you in advance. Matthieu
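
    As a worked check of the constraint quoted above (using the README's own numbers): shmall is counted in 4 KB pages, so 393216 pages x 4096 bytes = 1,610,612,736 bytes, which equals the shmmax value, and 1610612736 / 4096 = 393216 with no remainder, so it is also a multiple of 4096. The current limits can be inspected, and changed for the running kernel, from a terminal; the /etc/sysctl.conf entries are what make the values survive the reboot the README asks for:

      # show the current SysV shared-memory limits
      sysctl kern.sysv.shmmax kern.sysv.shmall

      # apply values to the running kernel only (persist them via /etc/sysctl.conf)
      sudo sysctl -w kern.sysv.shmmax=1610612736
      sudo sysctl -w kern.sysv.shmall=393216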


  • How can I configure Cyrus IMAP to submit a default realm to SASL?

    - by piwi
    I have configured Postfix to work with SASL using plain text, where the former automatically submits a default realm to the latter when requesting authentication. Assuming the domain name is example.com and the user is foo, here is how I configured it on my Debian system so far.

    In the Postfix configuration file /etc/postfix/main.cf:

      smtpd_sasl_auth_enable = yes
      smtpd_sasl_local_domain = $mydomain

    The SMTP configuration file /etc/postfix/smtpd.conf contains:

      pwcheck_method: saslauthd
      mech_list: PLAIN

    The SASL daemon is configured with the sasldb mechanism in /etc/default/saslauthd:

      MECHANISMS="sasldb"

    The SASL database contains a single user, as shown by sasldblistusers2:

      foo@example.com: userPassword

    Authentication works well without having to provide a realm, as Postfix submits it for me. However, I cannot find out how to tell the Cyrus IMAP daemon to do the same.

    I created a user cyrus in my SASL database, which uses the realm of the host domain name (not example.com), for administrative purposes. I used this account to create a mailbox through cyradm for the user foo:

      cm user.foo

    IMAP is configured in /etc/imapd.conf this way:

      allowplaintext: yes
      sasl_minimum_layer: 0
      sasl_pwcheck_method: saslauthd
      sasl_mech_list: PLAIN
      servername: mail.example.com

    If I enable cross-realm authentication (loginrealms: example.com), authenticating with imtest works with these options:

      imtest -m login -a foo@example.com localhost

    However, I would like to be able to authenticate without having to specify the realm, like this:

      imtest -m login -a foo localhost

    I thought that using virtdomains (set either to userid or on) together with defaultdomain: example.com would do just that, but I cannot get it to work. I always end up with this error:

      cyrus/imap[11012]: badlogin: localhost [127.0.0.1] plaintext foo SASL(-13): authentication failure: checkpass failed

    My understanding is that cyrus-imapd never submits the realm when trying to authenticate the user foo. My question: how can I tell cyrus-imapd to send the domain name as the realm automatically? Thanks for your insights!
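
    A hedged way to narrow down where the realm gets lost is to ask saslauthd directly, bypassing Cyrus, with testsaslauthd (the password below is a placeholder):

      # succeeds only if saslauthd itself can resolve the bare username
      testsaslauthd -u foo -p secret

      # same check with the realm passed explicitly, mirroring what works via imtest
      testsaslauthd -u foo -r example.com -p secret

    If only the second form succeeds, the realm really is missing from what Cyrus hands to saslauthd, rather than the credentials being wrong.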

