Search Results

Search found 30279 results on 1212 pages for 'database drift'.


  • Copy/paste filtered column in Excel - error message

    - by hazymat
    Firstly I should state that I don't believe in spreadsheets; my normal mode of operation is that data should exist either in a database or a text file... However, employment requires... In short, I have filtered a worksheet by column A, and I want to copy/paste from column B to column C. Obviously I don't wish to copy/paste values from rows that were filtered out. The above sounds ridiculously simple, right? First I tried simply copy/pasting on the filtered worksheet. This appeared to select and copy only the filtered data; however, pasting appeared to insert values into hidden/filtered rows, as you might expect. My initial research suggests I should select the filtered data and press Alt+; (that is, Alt plus semicolon), which is the shortcut for Go To Special > Visible cells only, then just copy and paste. Ctrl+C correctly copies the filtered data, however when I go to paste the values into another column, it pastes into hidden rows as well. Okay, so perhaps I should also "Select Visible" on the cells I wish to paste into? Nope - that gives me the error "That command cannot be used on multiple selections." What am I doing wrong?!

    Read the article

  • arch openldap authentication failure

    - by nonus25
    I set up OpenLDAP and everything looks fine, but I can't get authentication working. #getent shadow | grep user user:*::::::: tuser:*::::::: tuser2:*::::::: #getent passwd | grep user git:!:999:999:git daemon user:/:/bin/bash user:x:10000:2000:Test User:/home/user/:/bin/zsh tuser:x:10000:2000:Test User:/home/user/:/bin/zsh tuser2:x:10002:2000:Test User:/home/tuser2/:/bin/zsh From root I can log in as one of these users: #su - tuser2 su: warning: cannot change directory to /home/tuser2/: No such file or directory 10:24 tuser2@juliet:/root I can't log in via SSH, and passwd is not working either. #ldapwhoami -h 10.121.3.10 -D "uid=user,ou=People,dc=xcl,dc=ie" ldap_bind: Server is unwilling to perform (53) additional info: unauthenticated bind (DN with no password) disallowed 10:30 root@juliet:~ #ldapwhoami -h 10.121.3.10 -D "uid=user,ou=People,dc=xcl,dc=ie" -W Enter LDAP Password: ldap_bind: Invalid credentials (49) The password I typed is correct. /etc/openldap/slapd.conf access to dn.base="" by * read access to dn.base="cn=Subschema" by * read access to * by self write by users read by anonymous read access to * by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write by users read by anonymous auth access to attrs=userPassword,gecos,description,loginShell by self write access to attrs="userPassword" by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write by anonymous auth by self write by * none access to * by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write by dn="uid=achmiel,ou=People,dc=xcl,dc=ie" write by * search access to attrs=userPassword by self =w by anonymous auth access to * by self write by users read database hdb suffix "dc=xcl,dc=ie" rootdn "cn=root,dc=xcl,dc=ie" rootpw "{SSHA}AM14+..." Those are some parts of that conf file. /etc/openldap/ldap.conf looks like: BASE dc=xcl,dc=ie URI ldap://192.168.10.156/ TLS_REQCERT allow TIMELIMIT 2 So my question is: what am I missing that stops LDAP from letting me log in with a password?
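
    A hedged guess, based only on the ACL fragments quoted above: slapd applies the first "access to" clause that matches, so a broad "access to *" rule placed before the userPassword rules makes the later, more specific rules dead and can leave simple binds without "auth" access to the password attribute. A minimal sketch of an ordering that allows binds, reusing the same DNs and assuming these ACLs sit in the hdb database section:

      # sketch only - put the password rule first and end it with an explicit deny
      access to attrs=userPassword
          by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write
          by self write
          by anonymous auth
          by * none
      access to attrs=gecos,description,loginShell
          by self write
          by * read
      # the existing catch-all "access to *" rules follow, unchanged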

    Read the article

  • MySQL extension of PHP not working

    - by Víctor
    On a Debian server, after installation and removal of SquirrelMail (with some downgrade and upgrade of php5, mysql...), the MySQL extension of PHP has stopped working. I have php5-mysql installed, and when I try to connect to a database through php-cli I connect successfully, but when I try to connect from a page served by Apache I cannot connect. This script, run by php5-cli: echo phpinfo(); $link = mysql_connect('localhost', 'user', 'password'); if (!$link) { die('Could not connect: ' . mysql_error()); } echo 'Connected successfully'; mysql_close($link); prints the phpinfo output, which includes "/etc/php5/cli/conf.d/mysql.ini" and also the MySQL section with all the configuration: SOCKET, LIBS... and then it prints "Connected successfully". But when run by Apache and accessed through a web browser, it displays the phpinfo output, which includes "/etc/php5/apache2/conf.d/mysql.ini", but the MySQL section is missing, and the script dies printing "Fatal error: Call to undefined function mysql_connect()". Note that both "/etc/php5/cli/conf.d/mysql.ini" and "/etc/php5/apache2/conf.d/mysql.ini" are in fact the same configuration, because in Debian I have the structure: /etc/php5/apache2 /etc/php5/cgi /etc/php5/cli /etc/php5/conf.d And both point at the same directory: /etc/php5/apache2/conf.d -> ../conf.d /etc/php5/cli -> ../conf.d Where /etc/php5/conf.d/mysql.ini consists of one line: extension=mysql.so So my question is: why is the MySQL extension for PHP not working if I have the configuration included in just the same way as in php-cli, which is working? Thanks a lot!
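
    A hedged thing to try, assuming the SquirrelMail downgrade/upgrade left the Apache SAPI without a usable mysql.so even though the symlinked ini is in place: reinstall php5-mysql, confirm the shared object and ini really exist, and restart Apache (the extension-dir path below is a typical Debian location, not verified for this box):

      apt-get install --reinstall php5-mysql
      ls -l /etc/php5/apache2/conf.d/mysql.ini /usr/lib/php5/*/mysql.so
      php -m | grep -i mysql        # what the CLI SAPI loads
      /etc/init.d/apache2 restart
      # then re-check the phpinfo() page served by Apache for a "mysql" section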

    Read the article

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80, however I'm struggling to get my VPN to behave how I'd expect so the developers don't have the access they require. The VPN connects fine and I can access the back-end database (SQL Server) which resides on the server with the client tools from the laptops. However they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (of 192.168.100.1 - 20). The clients pick up a valid IP Address (i.e. 192.168.100.9) when they connect. There is no name resolution setup, DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP Address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the ip, but not map by it. Some details: Server OS is Windows 2008 (Datacenter) VPN is SSTP using RRAS Clients are all Windows 7 I've tried temporarily disabling the firewalls So, why can we not access the file system when everything else (ping, RDP, SQL Server clients tools) works? Thanks for your help Duncan

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • Webserver maxes CPU when Apache and MySQL are run together

    - by Tim
    This website has been running fine without issues. Recently it went down. After some investigation it looks like the combo of MySQL and Apache brings the box to its knees. Apache runs fine serving static web pages and MySQL runs fine when the website isn't working. As soon as the website is enabled with SQL running, the CPU on the box remains at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png I've checked the SQL database for errors and tried tuning nearly every parameter in the Apache/MySQL conf files for performance. The server is a Red Hat based box running the latest software packages. Any help/suggestions are welcome. Doing an strace on a high-CPU Apache process I see the following: read(14, "", 8192) = 0 close(14) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 14 fcntl64(14, F_SETFL, O_RDONLY) = 0 fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR) connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0 setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0 setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0 setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 Here is what I see from a mysql process: futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0 futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0 gettimeofday({1301465020, 141613}, NULL) = 0 clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0 futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0 futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0 exit_group(0) = ?
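
    The strace fragments above only show normal socket setup and futex waits, so they don't identify a culprit. A hedged next step is to enable MySQL's slow query log and watch the process list while the site is enabled, to see whether a handful of queries are pinning the CPU. A sketch, assuming MySQL 5.1-style variable names (older builds use log-slow-queries instead):

      # /etc/my.cnf
      [mysqld]
      slow_query_log      = 1
      slow_query_log_file = /var/log/mysql-slow.log
      long_query_time     = 1

      # while the site is under load:
      mysqladmin -u root -p processlist
      top -c    # confirm whether httpd or mysqld is actually burning the CPU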

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower as it has to travel over the WAN. Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch its master node to (if available) the one that's in the same server room as the active Heartbeat/Corosync node, and (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB? Theories: Most likely I can get Corosync to run something that logs in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the application service and its MySQL service would migrate as one if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?

    Read the article

  • Browser says "Waiting for www.xyz.com" for a very long time

    - by Phil
    When I load my website (hosted with iPage), the browser often takes an incredibly long time saying "Waiting for www.xyz.com ..." before any elements of the site actually appear. After this "Waiting for" phase, the text, images and everything else load quite fast. I contacted my host with my tracert result and they said they optimized my website database and increased the memory available to PHP on my account to 64 MB. They also said they checked the issue by accessing my website and found it loading fine without any slowness: "It seems to be a temporary issue. Please try to access your website with a different browser and network." I tried different browsers and networks but this "Waiting for" phase always takes too long. My website is http://www.surreyextra.com/ . It's WordPress and BuddyPress. I'm in the UK while iPage's hosting is in the USA; can this potentially be a problem? I have tried a number of optimizations, like minifying my CSS and JS files and using caching, but the problem hasn't improved. So is it my host's fault? Should I contact them again?
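
    A hedged way to quantify where the delay sits (DNS, TCP connect, or the server generating the page) is curl's timing output; a long time to first byte combined with quick lookup/connect times usually points at slow PHP/MySQL work on the host rather than the UK-US network path. Sketch:

      curl -o /dev/null -s -w 'dns: %{time_namelookup}  connect: %{time_connect}  first byte: %{time_starttransfer}  total: %{time_total}\n' http://www.surreyextra.com/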

    Read the article

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database. I created a -services.xml file in the deploy directory with the queue definition, like the one below: <server> <mbean code="org.jboss.mq.server.jmx.Queue" name="jboss.mq.destination:service=Queue,name=myfirstqueue"> <depends optional-attribute-name="DestinationManager"> jboss.mq:service=DestinationManager </depends> </mbean> </server> All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?) Thanks
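
    On the lookup question, one hedged approach (a sketch, not verified against this exact setup): with an HA-singleton queue, clients usually resolve QueueConnectionFactory through HA-JNDI, which federates the JNDI trees of all nodes and by default listens on port 1100 in the "all" configuration, instead of the per-node JNDI on 1099. A hypothetical jndi.properties for the client side:

      java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
      java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
      # list every cluster node's HA-JNDI port; lookups fail over automatically
      java.naming.provider.url=nodeA:1100,nodeB:1100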

    Read the article

  • pg_dump not working - do I need to change order of $PATH?

    - by A4J
    I'm trying to set the $PATH to pick up the latest version of pg_dump as I'm currently getting a mismatch error while doing a migrate in my Rails app (I recently changed the schema type to SQL). I have added a new file in /etc/profile.d called pg_dump.sh, and inside that put: PG_DUMP=/usr/pgsql-9.1 export PG_DUMP PATH=$PATH:$PG_DUMP/bin export PATH On looking at echo $PATH, I get: /usr/local/rvm/gems/ruby-1.9.3-p194/bin:/usr/local/rvm/gems/ruby-1.9.3-p194@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p194/bin:/usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/pgsql-9.1/bin:/root/bin And I still get the error. Do I need to change the order? If so any ideas how? Output of 'ls /usr/pgsql-9.1/bin': clusterdb droplang pg_archivecleanup pg_ctl pg_standby psql createdb dropuser pg_basebackup pg_dump pg_test_fsync reindexdb createlang ecpg pgbench pg_dumpall pg_upgrade vacuumdb createuser initdb pg_config pg_resetxlog postgres vacuumlo dropdb oid2name pg_controldata pg_restore postmaster And output of 'which pg_dump': /usr/bin/pg_dump Error message on running cap 'deploy:migrate': ** [out :: 46.4.9.199] pg_dump: server version: 9.1.4; pg_dump version: 8.4.11 ** [out :: 46.4.9.199] pg_dump: aborting because of server version mismatch ** [out :: 46.4.9.199] rake aborted! ** [out :: 46.4.9.199] Error dumping database output of 'pg_dump --version': pg_dump (PostgreSQL) 8.4.11
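
    Since $PATH is searched left to right, /usr/bin/pg_dump (8.4) keeps winning because it appears before /usr/pgsql-9.1/bin. A hedged fix is to prepend rather than append in the profile.d script; note also that a capistrano/rake run may use a non-login shell that never sources /etc/profile.d, in which case a symlink earlier in the default path is the simpler route:

      # /etc/profile.d/pg_dump.sh
      PG_DUMP=/usr/pgsql-9.1
      export PG_DUMP
      PATH=$PG_DUMP/bin:$PATH
      export PATH

      # alternative if profile.d isn't sourced by the deploy process
      # (/usr/local/bin already precedes /usr/bin in the PATH shown above):
      ln -sf /usr/pgsql-9.1/bin/pg_dump /usr/local/bin/pg_dump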

    Read the article

  • What's the best way to update Ubuntu 9.04?

    - by Fu86
    I have an Ubuntu 9.04 server which has no package support anymore. If I want to update my package lists, I get the following errors: Err http://de.archive.ubuntu.com jaunty-security/multiverse Packages 404 Not Found [IP: 141.30.13.10 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-amd64/Packages 404 Not Found [IP: 141.30.13.10 80] .... I read on the official Ubuntu support page that there is an update-manager-core package to upgrade to a new release. Unfortunately I don't have this package installed and I am unable to install it because of the lack of package sources. EDIT: Installing the package update-manager-core from another release doesn't work because it depends on a higher version of python-apt. (Tried with 10.04) $ dpkg -i update-manager-core_0.134.7_amd64.deb Selecting previously deselected package update-manager-core. (Reading database ... 28743 files and directories currently installed.) Unpacking update-manager-core (from update-manager-core_0.134.7_amd64.deb) ... dpkg: dependency problems prevent configuration of update-manager-core: update-manager-core depends on python-apt (>= 0.7.13.4ubuntu3); however: Version of python-apt on system is 0.7.9~exp2ubuntu10. update-manager-core depends on python-gnupginterface; however: Package python-gnupginterface is not installed. dpkg: error processing update-manager-core (--install): dependency problems - leaving unconfigured Errors were encountered while processing: update-manager-core So, what's the best way to upgrade to the current release without reinstalling the complete (virtual) server?
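
    A hedged approach, assuming the mirrors have simply dropped the end-of-life jaunty files: point sources.list at old-releases.ubuntu.com, which keeps archives for EOL releases, then install update-manager-core from there and step through the releases one at a time (9.04 -> 9.10 -> 10.04 -> ...):

      sed -i -e 's/\(de\.\)\?archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
             -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      apt-get update
      apt-get install update-manager-core
      do-release-upgrade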

    Read the article

  • Not able to run Firefox on a headless Ubuntu Server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb in order to fake the GUI (that's my understanding; correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck with the following situation: $ sudo Xvfb -ac :99 & [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list! (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) $ firefox [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list! [config/dbus] couldn't register object path (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) (EE) config/hal: NewInputDeviceRequest failed (2) Xlib: extension "RANDR" missing on display ":99.0". GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: /bin/dbus-launch terminated abnormally without any error message) I'm running Firefox without installing it from the repositories, and I'm getting a socket timeout when I try to run the Selenium tests, so I guess the problem is in Firefox and Xvfb. I have already installed this package: i gconf-defaults-service - GNOME configuration database system (system defaults service) which some forums suggest as a fix, but in my case it doesn't work. Any explanation of the problem and ways of solving it without installing a full GUI would be very helpful.
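
    A hedged sketch of how the pieces are usually wired together: make sure DISPLAY points at the virtual framebuffer in the same shell (or Hudson job) that launches Firefox, and that a session D-Bus is available so GConf can start. The xvfb-run wrapper from the xvfb package handles the display bookkeeping for you; the package names below are assumptions for Ubuntu 9.10:

      apt-get install xvfb dbus-x11
      # one-shot wrapper that starts Xvfb, sets DISPLAY and cleans up afterwards
      xvfb-run --server-args="-screen 0 1024x768x24" firefox &

      # or by hand, e.g. from the Hudson job:
      Xvfb :99 -ac -screen 0 1024x768x24 &
      export DISPLAY=:99
      dbus-launch firefox &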

    Read the article

  • Trouble with backslash characters and rsyslog writing to postgres

    - by Flimzy
    I have rsyslog 4.6.4 configured to write mail logs to a PostgreSQL database. It all works fine, until the log message contains a backslash, as in this example: Jun 12 11:37:46 dc5 postfix/smtp[26475]: Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:\pmta\spool\B\00000414, status = ERROR_DISK_FULL in "DATA" (in reply to end of DATA command)) The above is the log entry, as written to /var/log/mail.log. It is correct. The trouble is that the backslash characters in the file name are interpreted as escapes when sent to the following SQL recipe: $template dcdb, "SELECT rsyslog_insert(('%timereported:::date-rfc3339%'::TIMESTAMPTZ)::TIMESTAMP,'%msg:::escape-cc%'::TEXT,'%syslogtag%'::VARCHAR)",STDSQL :syslogtag, startswith, "postfix" :ompgsql:/var/run/postgresql,dc,root,;dcdb As a result, the rsyslog_insert() stored procedure gets the following value for as msg: Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:pmtaspoolB The \p, \s, \B and \0 in the file name are interpreted by PostgreSQL as literal p, s, and B followed by a NULL character, thus early-terminating the string. This behavior can be easiily confirmed with: dc=# SELECT 'd:\pmta\spool\B\00000414'; ?column? -------------- d:pmtaspoolB (1 row) dc=# Is there a way to correct this problem? Is there a way I'm not finding in the rsyslog docs to turn \ into \\?
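
    One hedged angle, assuming PostgreSQL 8.2 or later and that nothing else in the "dc" database relies on backslash-escape string parsing: turning on standard_conforming_strings makes the server treat backslashes in ordinary quoted literals as plain characters, so the Windows path survives intact; the alternative is to keep the old behaviour and have the template double every backslash before it reaches the INSERT. Sketch:

      -- check the current behaviour, then flip it for the logging database
      SHOW standard_conforming_strings;
      ALTER DATABASE dc SET standard_conforming_strings = on;
      -- after reconnecting:
      SELECT 'd:\pmta\spool\B\00000414';   -- now returns the path unchanged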

    Read the article

  • My .htaccess file redirect problems

    - by Glenn Curtis
    I am hoping you can help me! Below is the .htaccess file for my Apache server running on top of Ubuntu Server. This is my local server, which I installed so I can develop my site on it instead of on my live site. However, I now have all my files and the database on localhost, but each time I access my server, vaio-server (it's a Sony laptop), it just takes me to my live site! Everything is in the root of Apache, /var/www - it's the only site I will develop on this system, so I don't need to configure it to serve anything other than this one site. I think that's all; the Apache files, sites-available/default etc. are as standard. Please help! Many thanks, Glenn Curtis. DirectoryIndex index.php index.html # Upload sizes php_value upload_max_filesize 25M php_value post_max_size 25M # Avoid folder listings Options -Indexes <IfModule mod_rewrite.c> Options +FollowSymlinks RewriteEngine on RewriteBase / # Maintenance #RewriteCond %{REQUEST_URI} !/maintenance.html$ #RewriteRule $ /maintenance.html [R=302,L] #Redirects to www #RewriteCond %{HTTP_HOST} !^vaio-server [NC] #RewriteCond %{HTTPS}s ^on(s)|off #RewriteRule ^(.*)$ glenns-showcase.net/$1 [R=301,QSA,L] #Empty string RewriteRule ^$ app/webroot/ [L] RewriteRule (.*) app/webroot/$1 [L] </IfModule>
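
    The redirect rules in that .htaccess are commented out, so a hedged guess is that the redirect comes from WordPress itself: it stores its own address in the database (the siteurl and home options) and bounces requests to that URL. A sketch of the check and fix, assuming the default wp_ table prefix and that the copied database still carries the live site's URL:

      -- inspect the stored URLs
      SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home');
      -- hypothetical fix: point them at the local server name
      UPDATE wp_options SET option_value = 'http://vaio-server' WHERE option_name IN ('siteurl', 'home');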

    Read the article

  • Why Is Volume Shadow Copy Services stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit. I am running a backup-over-the-internet software client which depends on the Volume Shadow Copy Services running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me) the backup service is failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list -- stuff I really don't care about -- but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to be Automatic. When I look in the Event Log, System channel I can see at boot time: The Volume Shadow Copy service entered the running state. ...and then two or three minutes later: The Volume Shadow Copy service entered the stopped state. How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184 net stop SENS net stop EventSystem net start EventSystem net start SENS net stop COMSysApp net stop SwPrv net stop VSS cd /d C:\Windows\system32 regsvr32 ole32.dll /s regsvr32 oleaut32.dll /s regsvr32 vss_ps.dll /s vssvc /register /s regsvr32 /i swprv.dll /s regsvr32 /i eventcls.dll /s regsvr32 es.dll /s regsvr32 stdprov.dll /s regsvr32 vssui.dll /s regsvr32 msxml.dll /s regsvr32 msxml3.dll /s regsvr32 msxml4.dll /s net start SwPrv net start VSS net start ProtectedStorage ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer (same OS, same failure, same start point after SP1 install) but the problem went away when I forced Volume Shadow Copy Services to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)
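
    Two hedged notes: the Volume Shadow Copy service is a demand-started service, so a start followed a few minutes later by a stop is usually normal once no requester is active - the more interesting question is whether a snapshot request fails while the backup is actually running. Checking the writers and providers during a backup window usually narrows that down:

      vssadmin list writers
      vssadmin list providers
      vssadmin list shadowstorage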

    Read the article

  • Oracle logical standby fails with ORA-01919

    - by DCookie
    I have an Oracle logical standby database being managed via data guard. Just this morning the redo apply process began failing with an ORA-01919 error, indicating one of our application roles did not exist. However, I can see the role on both primary and standby databases. We also have a physical standby that has long since applied the redo where this is happening on the logical, without issue. I have opened an SR with Oracle. I was wondering if anyone out there has seen this before. I guess I should mention: Oracle 10.2.0.4, Win2003 Server SP2. UPDATE: So far, Oracle Support has not provided an answer. I thought I'd post here what I have learned so far. It appears that a grant of DBA on the primary host to a role works fine for users granted the role. It does not work on the logical standby. IOW: create role TEST; grant dba to TEST; grant TEST to auser; connect auser set role TEST; grant <existing role> to <existing user>; This works on the primary instance but fails on the logical. A workaround appears to be to grant each role on the primary to the role TEST with admin option in the logical: grant <existing role> to TEST with admin option; <== do this on the logical standby Then the command works on the logical standby.

    Read the article

  • How to determine the Kerberos realm from an LDAP directory?

    - by tstm
    I have two Kerberos realms I can authenticate against. One of them I can control, and the other one is external from my point of view. I also have an internal user database in LDAP. Let's say the realms are INTERNAL.COM and EXTERNAL.COM. In ldap I have user entries like this: 1054 uid=testuser,ou=People,dc=tml,dc=hut,dc=fi shadowFlag: 0 shadowMin: -1 loginShell: /bin/bash shadowInactive: -1 displayName: User Test objectClass: top objectClass: account objectClass: posixAccount objectClass: shadowAccount objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson uidNumber: 1059 shadowWarning: 14 uid: testuser shadowMax: 99999 gidNumber: 1024 gecos: User Test sn: Test homeDirectory: /home/testuser mail: [email protected] givenName: User shadowLastChange: 15504 shadowExpire: 15522 cn: User.Test userPassword: {SASL}[email protected] What I would like to do, somehow, is to specify per-user basis to which authentication server / realm the user is authenticated against. Configuring kerberos to handle multiple realms is easy. But how to I configure other instances, like PAM, to handle the fact that some users are from INTERNAL.COM and some from EXTERNAL.COM? There needs to be an LDAP lookup of some kind where the realm and the authentication name is fetched from, and then the actual authentication itself. Is there a standardized way to add this information to LDAP, or look it up? Are there some other workarounds for a multi-realm user base? I might be ok with a single realm solution, too, as long as I can specify the user name - realm -combination for the user separately.
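
    One hedged observation from the entry above: the realm is already recorded per user, because OpenLDAP's pass-through authentication treats a userPassword of {SASL}user@REALM as "hand this bind to saslauthd as user@REALM". If saslauthd runs with the kerberos5 mechanism and krb5.conf knows both realms, each user is checked against whichever realm their userPassword names, and PAM only ever needs to talk to LDAP. A sketch of the supporting configuration (the KDC host names are assumptions, and slapd must be built with SASL pass-through support):

      # /etc/krb5.conf
      [realms]
          INTERNAL.COM = {
              kdc = kdc.internal.com
          }
          EXTERNAL.COM = {
              kdc = kdc.external.com
          }

      # run saslauthd with the Kerberos mechanism
      saslauthd -a kerberos5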

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange: CHMOD 0777 both directory and files, CHOWN root, apache (only two users I can think of that PHP would use), Importing SQL files with total size under both upload_max_filesize and post_max_size, PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory), PHP safe mode is OFF (it was never ON) At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
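
    Given that the same permissions work on stock CentOS but not on this RHEL 6.2 EC2 image, a hedged suspect is SELinux file context rather than classic Unix permissions (chmod 0777 wouldn't help there, which matches the symptoms). A quick check and a possible fix, with a hypothetical path for wherever the SQL dumps actually live under /var/www:

      getenforce                          # Enforcing?
      ls -Z /var/www/path/to/dumps        # look for an unexpected context, e.g. user_home_t
      # hypothetical fixes:
      restorecon -Rv /var/www             # reset the tree to its default httpd-readable context
      # or, to test whether SELinux is the cause at all:
      setenforce 0                        # permissive mode, temporarily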

    Read the article

  • How to configure a trusted connection between IIS 7 and SQL Server 2005?

    - by user1180652
    How do I configure a trusted connection between IIS 7 and SQL Server 2005? My webapp was working fine with Windows Authentication enabled in IIS. Now, in order to solve a problem, we need to use a trusted connection. Unfortunately, enabling the trusted connection in the web.config broke the webapp. Oddly enough, when I run this application with the trusted connection from my local dev machine (using the Cassini web server), it works. IIS (Windows Server 2008) is running on one machine. The database (SQL Server 2005, but it could migrate to 2008) is running on another machine. We are on a Windows domain running AD. All traffic is within our own firewall - no public access. Beyond that, I can't provide much info but I can find it. We're very "compartmentalized" (we have server people, security people, Oracle people, SQL Server people, etc.) Thanks! Update 02/14/2012 09:02: The webapp is now functional (no longer broken) but the main issue is still unresolved. Now I have the app's application pool running as a domain account with permissions on the SQL Server box and the IIS box. We were using this account to run the application but, and here's the problem, we need to log the real user name that made a change. When using the service account, the name of that service account appeared in the audit tables, making the auditing quite useless. So now I'm at least running again. The connection string in the web.config is using "Trusted_Connection=True", the app pool is using a domain account with access to both boxes, BUT when I make a change (logged in as me) the name of the service account (the app pool identity) is still logged in the audit tables. I also manually granted full permissions to the service account on the webapp folder. What do I need to do in order to log my name, not the service account, in the audit tables? Everything I'm reading says I need to establish a trusted connection between the two servers.
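
    A hedged sketch of the usual pattern for getting the end user's identity all the way to SQL Server: keep Windows authentication, turn on ASP.NET impersonation so data access runs as the caller rather than the app pool account, and configure Kerberos constrained delegation for the app pool's domain account so the impersonated token can hop from the IIS box to the SQL Server box (the classic double-hop problem - Cassini works because there is no second hop). web.config fragment:

      <system.web>
        <authentication mode="Windows" />
        <identity impersonate="true" />
      </system.web>

    Delegation is then set on the app pool account in Active Directory Users and Computers ("Trust this user for delegation to specified services only", adding the MSSQLSvc SPN of the database server); without it the impersonated credentials cannot leave the IIS machine.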

    Read the article

  • Using virtualization infrastructure for J2EE application distribution- viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example: latest app server patch not applied; conflicting third-party libraries inside the app server root; server runtime and tuning parameters not configured (for example, the number of connections in the database pool). We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as those of using virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure - distributing a VM should be less affected by the hosting environment. So far, the downside I see is in application server licenses; here they will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is the complexity of maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

    Read the article

  • Msg 10054, Level 20, State 0, Line 0 error when altering a stored procedure to add a couple of cursors

    - by doug_w
    We have a home-rolled backup stored procedure that uses xp_cmdshell to create and clean up database backups. We have an instance that is 2005 sp3 that we are trying to deploy this script to. I am at a bit of a loss for why it is not working. When I execute the create it runs for about 30 seconds and yields the following error: Msg 10054, Level 20, State 0, Line 0 A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) In my tinkering I discovered that by removing the cursors that actually do the work it will allow me to create the stored procedure (not very helpful for me though). If I add the cursors back in using an alter the error returns. I would be curious if someone has experienced this problem and knows of a solution or work around. I am not opposed to posting the source, it is just lengthy. Things I have checked: Error Logs No dump files in the log directory Thanks in advance for the help.

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically on the fly, from a template? For example, if I append a line like this to the KERNEL: APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi Then the script ks.cgi can determine what host this is (Via the MAC address), and print out Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this: APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80 After we kickstart the server, we activate Cfengine/Puppet on this system and manage the system using our favorite Configuration Management product. We're experimenting with xCAT but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this. Update: A roll-your-own solution is discussed in the O'Reilly book: Managing RPM-Based Systems with Kickstart and Yum, Chapter 3. Customizing Your Kickstart Install Dynamic ks.cfg, which echos some of the comments in this thread: To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don’t change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine’s data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine’s unique identifier as a URL parameter. For example: boot: linux ks=http://your.kickstart.server/gen_config?host-server25 In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25. But where, oh where, is the code for ks.cgi?
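
    For the "where is the code for ks.cgi" question, a minimal sketch of such a generator - a plain Python CGI with a flat-file data store, where the file name, field layout, URL parameters and template contents are all hypothetical:

      #!/usr/bin/env python
      # ks.cgi - hypothetical dynamic kickstart generator
      import cgi

      # flat-file data store: one "host,ip,nodetype" line per machine
      hosts = {}
      for line in open('/etc/ks-hosts.csv'):
          name, ip, nodetype = line.strip().split(',')
          hosts[name] = {'host': name, 'ip': ip, 'nodetype': nodetype}

      template = ("install\n"
                  "url --url http://192.168.1.100/sl6\n"
                  "network --bootproto static --ip %(ip)s --hostname %(host)s\n"
                  "# ... the unchanging parts of ks.cfg follow, varied by %(nodetype)s ...\n")

      form = cgi.FieldStorage()
      values = dict(hosts.get(form.getfirst('host', ''),
                              {'host': 'default', 'ip': 'dhcp', 'nodetype': 'production'}))
      for key in ('ip', 'nodetype'):        # URL parameters override the data store
          if form.getfirst(key):
              values[key] = form.getfirst(key)

      print("Content-Type: text/plain\n")
      print(template % values)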

    Read the article

  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of: HTML, CSS, JavaScript, PHP, MySQL, and potentially Ajax or some other web component. For the project, we can use a local server using WampServer and basically build the site entirely on our laptops. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content, the database, and do the background computation, if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either on Amazon, or some third-party sites, or even a good book, as long as it is up to date)? The ideal documentation would be a tutorial on creating a web site from A to Z on AWS, as detailed as possible. As you can guess, I have a limited understanding of the IT issues. I have zero Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage PS: Please keep the answers AWS-specific. At this point, I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.

    Read the article

  • IP Blacklists and suspicious inbound and outbound traffic

    - by Pantelis Sopasakis
    I administer a web server and recently we had our IP banned (!) by our host after they received a notification e-mail for abuse. In particular, our server is allegedly involved in spam attacks over HTTP. The content of the abuse report email we received was not very informative - for example, the IP addresses our server is supposed to have attacked are not included - so I started a Wireshark session checking for suspicious traffic over TCP/HTTP while trying to locate possible security holes on the system. (Let me note that the machine runs a Debian OS.) Here is an example of such a request... Source: 89.74.188.233 Destination: 12.34.56.78 // my ip Protocol: HTTP Info: GET 'http://www.media.apniworld.com/image.php?type=hv' HTTP/1.0 I manually blacklisted this host (as well as some other ones), blocking them with iptables, but I can't keep doing this manually all day long... I'm looking for an automated way to block such IPs based on: statistical analysis, pattern recognition or other AI-based analysis (though I'm reluctant to trust such a solution, if one exists); public blacklists; using DNSBL. I actually found out that 89.74.188.233 is blacklisted. However, other IPs which are strongly suspicious, like 93.199.112.126 (i.e. http://www.pornstarnetwork.com/account/signin), unfortunately were not blacklisted! What I would like to do is automatically connect my firewall to a DNSBL (or some other blacklist database) and block all traffic towards blacklisted IPs, or somehow have my local blacklist automatically updated.
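
    A hedged sketch of the DNSBL-to-firewall glue, with the usual caveats: Spamhaus ZEN is designed for mail policy, has query limits, and lists spam sources rather than arbitrary HTTP peers, so treat this as illustration rather than a turnkey defence. The script name and where it gets fed (a log watcher, cron, etc.) are assumptions:

      #!/bin/bash
      # block-if-listed.sh <ip> - hypothetical helper: drop traffic to/from a DNSBL-listed host
      IP="$1"
      REVERSED=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1}')
      if host "$REVERSED.zen.spamhaus.org" > /dev/null 2>&1; then
          iptables -I OUTPUT -d "$IP" -j DROP
          iptables -I INPUT  -s "$IP" -j DROP
          logger "blocked DNSBL-listed host $IP"
      fi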

    Read the article

  • Timeout settings for Remote Desktop Sessions to lock

    - by atroon
    Our office uses a Windows 2003 server to provide access to an accounting application. Recently I was asked to increase the amount of time it takes for the session to lock itself and require the entry of the user's password to resume. That seems to be about ten minutes, at present. I am familiar with group policy and have tweaked those settings to scavenge sessions (and thereby licenses) from sessions that have been disconnected (by the user closing the mstsc.exe client or by a network issue). That's simple and straightforward. But I can't find anything in GP to allow a longer time period before the RDP client window goes black and then, when clicked upon, requires a username and password to resume the session. I must admit this would be nice personally as well, since most of my time is spent documenting the application and/or monitoring its database, so I usually have a window open to the terminal server along with the rest of the staff in the accounting center, but I interact with it very little. I usually enter my password 10-15 times per workday, but I'm pretty good at it by now. ;) So, can this timeout period be adjusted, or are we out of luck?

    Read the article
