Search Results

Search found 31628 results on 1266 pages for 'legacy database'.

  • When to raise domain functional level?

    - by Joel Coel
    We very recently completed a project to retire two old domain controllers running Server 2003 R2. They are now replaced with shiny new 2008 R2 boxes. However, the functional level of the domain has not yet been raised for the 2008 R2 servers, just in the long-shot case that we need to roll back to the old controllers. I expect to have the all-clear to update the domain by next weekend. I should also note that our desktop clients are still 95% Windows XP; however, we're about to start a project to update our 200 or so clients to Windows 7 before the end of the calendar year. Is there any advantage to holding the domain at the 2003 functional level while we are still supporting more Windows XP than Windows 7, especially given that some of the management stations are still XP? Update: I forgot to mention earlier that we still have a pair of Windows 2000 servers (not domain controllers) that support some legacy software. I'm working to replace those, but in the meantime I need to be sure that Windows 2000 can still participate in a 2008 R2 domain.
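
    For reference, the ActiveDirectory PowerShell module that ships with 2008 R2 can both report and raise the functional level once the rollback window has closed; a minimal sketch, assuming the domain is example.com (hypothetical) and this is run on one of the new DCs - note that raising the level is a one-way operation:

        Import-Module ActiveDirectory
        Get-ADDomain | Select-Object DomainMode        # current functional level
        # irreversible -- run only once the 2003 R2 rollback option is no longer needed:
        Set-ADDomainMode -Identity example.com -DomainMode Windows2008R2Domain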

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. The database is about 120 GB, on SQL Server 2000. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB: a hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less thanks to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell whether their log was full or empty, and there are no replies to it. So I am guessing it is not a problem - does anyone know for sure?
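
    If the physical size of the mostly-empty file is itself the worry, SQL Server 2000 can report log usage and shrink the file in place; standard commands, with a hypothetical logical file name (sp_helpfile shows the real one):

        DBCC SQLPERF(LOGSPACE);              -- percent of each database's log actually in use
        EXEC sp_helpfile;                    -- run in the database to find the log's logical name
        DBCC SHRINKFILE (MyDB_Log, 4096);    -- shrink to ~4 GB; 'MyDB_Log' is a placeholder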

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error: "A network-related or instance-specific error occurred while establishing a connection to SQL Server". I've seen this a few times, but only during initial setup, where it's often caused by one of the following:

        - The database server is turned off
        - The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall)
        - The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled)

    However, assuming that no configuration changes have been made, I'm trying to work out what the reason might be for getting this error at a random point after the initial setup. My initial theory is that the SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server. Is this a valid theory? What other possible causes are there for this error that are not related to the initial setup? Or is it simply impossible for this error to occur without a configuration change having been made (on the SQL Server side, the application side, or somewhere in between, on the network)? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).
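
    To separate resource exhaustion from network or configuration causes the next time it happens, a quick hedged triage from the application server (the host name is a placeholder; 1433 is the default-instance port):

        ping dbserver                # basic reachability
        telnet dbserver 1433         # does anything still accept TCP connections on the SQL port?

    If the port still answers while the application gets the error, the SQL Server ERRORLOG around that timestamp is the next place to look.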

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to sustain peak I/O performance, and tons of unused disk space. I guess it's a universal issue, since vendors offer 300 GB as their smallest drives, but random I/O performance hasn't really grown much since the days when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive to response time. The redo logs get their own 300 GB mirror but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How do you set up the space so that the sysadmin team is reminded of the risk of hindering the performance of the main db/app? Or, even better, protected from that risk? The typical situation that comes to my mind is: "oh, I have this very large zip file, where do I uncompress it? Umm, let's see df -h and we'll figure something out in no time..." I don't put the emphasis on strictness of security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem whose I/O rate is capped to a very low level - is this possible?
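
    On Linux, one hedged possibility along exactly those lines is the cgroup blkio controller, which can cap throughput per block device for any process dropped into a low-priority group; a minimal cgroups-v1 sketch (the 8:16 major:minor pair and the 10 MB/s cap are assumptions - check ls -l /dev/sd*):

        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio   # if not already mounted
        mkdir /sys/fs/cgroup/blkio/scratch
        echo "8:16 10485760" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
        echo "8:16 10485760" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device
        echo $$ > /sys/fs/cgroup/blkio/scratch/tasks         # current shell (and children) now capped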

  • Solaris 10: How to remove devices from a zpool with /usr currently mounted?

    - by cali-spc
    I use Solaris 10 on SPARC. I have /usr legacy mounted on a zpool 'usr-pool'. I now need to move some of the devices in usr-pool to another zpool which is running out of room. What is the safest way for me to do this? I already know that (since my zpool is not mirrored) I need to destroy and recreate the zpool. I know how to backup and restore a zfs snapshot. However... I'm stumped on how to unmount usr-pool without losing access to the commands I need on /usr to complete the backup/restore. Cursory research indicated that I should boot to OpenBoot (init 0) and then 'boot cdrom -s'. I did this but none of the zpools are accessible on that runlevel. I also read I could just copy /usr to another location, symlink /usr to that location, then do my backup/restore. Is that safe to do? I would appreciate some guidance. S.
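
    For the data move itself, ZFS snapshots avoid copying through /usr until the final switchover, and pools are not imported automatically when booted from installation media - which may be why they seemed to vanish under 'boot cdrom -s'. A hedged sketch (pool and dataset names are hypothetical):

        # when booted from 'boot cdrom -s', import the pools by hand first:
        zpool import -f usr-pool
        zpool import -f other-pool

        zfs snapshot usr-pool/usr@migrate
        zfs send usr-pool/usr@migrate | zfs receive other-pool/usr
        # then point the legacy /usr mount (vfstab) at other-pool/usr before rebooting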

  • connections in FIN_WAIT and CLOSE_WAIT state

    - by Raj
    I would like to describe the setup so you can understand the question and answer more accurately. I have HAProxy as the load balancer, 4 web servers (Apache 2.2.3) and one database server (MySQL 5). I monitor these servers with Nagios. I have disabled keepalive on Apache, as we have only 8 GB of memory. Whenever I receive alerts for high memory and CPU utilization, I observe that the connections from Apache to the database server hang in the established state (keepalive with a timeout value of 7200), while on the other side the connections between HAProxy and Apache show FIN_WAIT on the HAProxy server and CLOSE_WAIT on the Apache side. I also see heavy memory swapping, with Apache taking most of the memory. I ran strace on an Apache process and got no information: strace attaches to the process but produces no output. The process list on the MySQL server shows those connections in sleep mode. The application on the web servers is Magento, a PHP application. If you need further information please let me know. Thanks.
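
    For watching this from Nagios or a shell, a standard one-liner counts connections per TCP state on each box:

        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn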

  • Connecting to a subdomain severs the connection to the domain itself. What's going on?

    - by TheAgent
    Hi all. We have a website on a third-party server (leased, and shared with other websites), and the server provides access to our SQL Server database through a subdomain of the form mssql.DomainName.com. I was told to use SQL Server Management Studio Express to connect to this subdomain in order to manage the database. After a few tries and many "Timeout" messages, I finally managed to connect to the server; everything was fine. But now I can't connect to DomainName.com anymore. When I try to browse DomainName.com in Firefox, it tries to look up the DomainName.com address and fails, telling me "the server was not found". I have to disconnect Management Studio from the server and wait a couple of hours for DomainName.com to become available again, and after that, reconnecting to the SQL Server repeats the scenario. While I can't browse DomainName.com directly, I can reach it through a proxy, which suggests the problem is somehow related to the DNS server my computer asks to translate the name to the corresponding IP. Anyone seen anything like this before? Any ideas? Thanks in advance.
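
    One way to confirm the DNS angle is to capture what the resolver returns right before and after connecting with Management Studio (DomainName.com stands in for the real domain, as above):

        nslookup DomainName.com
        nslookup mssql.DomainName.com
        ipconfig /displaydns         # inspect what the local Windows resolver has cached
        ipconfig /flushdns           # clear the cache, then retry the lookup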

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding to query_cache_size, no matter how much I increase the value (I tried up to 512 MB). On the other hand it warns that "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51
        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76 GB of RAM and dual E5-2650s. The load is usually below 2. I would appreciate your hints on interpreting the recommendation and optimizing the database config.
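
    Given the ~1M prunes per day against a 79.9% hit rate, the usual compromise is a moderate cache with a small per-result limit rather than an ever-larger cache; a hedged my.cnf sketch (values are starting points to measure against, not tuned figures):

        [mysqld]
        query_cache_type  = 1
        query_cache_size  = 128M   # per the tuner's own warning, stay at or below 128M
        query_cache_limit = 2M     # stop single large results from evicting many small ones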

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large reorganization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

        /var/www/sites/vhost1, vhost2, vhost3
            .../wordpress/blogX
            .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
            .../wordpress/blogX/uploads
            .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, and c) myself six months from now? Obviously I can make a wiki page or spreadsheet and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded with more technical details.
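
    For the visual part, one lightweight option is to keep a Graphviz description in version control next to the configs and render it to an image for the less technical audience; a minimal sketch with illustrative names:

        digraph webserver {
            rankdir=LR;
            "vhost1.conf" -> "/var/www/sites/vhost1";
            "/var/www/sites/vhost1" -> "/var/www/data/vhost1" [label="symlink"];
            "testing server" -> "live server" [label="rsync after testing"];
        }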

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own instance of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

        - Create a user that is allowed to ssh
        - Create a new MySQL database and user
        - Create a virtual host for the application
        - Log in with the new user and do a git checkout of the application (in the right location)
        - Create tables in the new database, and add some init data
        - Add some cron jobs
        - Create a first user that can log in
        - Add this new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally it should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, as sketched below, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
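
    A hedged bash skeleton of such a script - the vhost template, repo path, and password handling are all assumptions, and a matching delete script would mirror these steps:

        #!/bin/bash
        # provision.sh <customer> <domain> -- sketch only, not hardened
        set -e
        C=$1; DOMAIN=$2
        useradd -m -s /bin/bash "$C"
        PW=$(openssl rand -hex 12)
        mysql -e "CREATE DATABASE \`$C\`; GRANT ALL ON \`$C\`.* TO '$C'@'localhost' IDENTIFIED BY '$PW';"
        sed "s/__DOMAIN__/$DOMAIN/g; s/__USER__/$C/g" /etc/apache2/vhost.template \
            > "/etc/apache2/sites-available/$C"
        sudo -u "$C" git clone /srv/repos/app.git "/home/$C/app"   # hypothetical repo path
        # TODO: load schema + init data, install cron jobs, register instance with capistrano
        a2ensite "$C" && /etc/init.d/apache2 reload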

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when an entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries happens fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries between the two tables on ids. To sum it up:

        - 2 tables
        - lots of INSERTs
        - no UPDATEs
        - some DELETEs, once a day at most
        - some user-generated SELECTs with JOINs
        - a huge data set

    What would an optimal server configuration (software and hardware - I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (tables, indexes, ...) if needed.
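
    On the software side, an append-mostly workload of this kind usually starts from a handful of postgresql.conf knobs; a hedged sketch for a dedicated box with, say, 16 GB of RAM (figures are illustrative starting points for the 8.x/9.0 era, not tuned values):

        shared_buffers       = 4GB     # ~25% of RAM on a dedicated server
        effective_cache_size = 12GB    # tell the planner how large the OS cache is
        checkpoint_segments  = 32      # spread checkpoint I/O under heavy INSERT load
        maintenance_work_mem = 512MB   # faster index builds and VACUUM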

  • DBCC CHECKDB fails and quits the job with an ambiguous error message

    - by ddono25
    I received a notice that one of our servers' DBCC CHECKDB jobs (covering all databases) has failed the past four times it has been run. We don't have any data prior to that, but it doesn't look like it had been succeeding for a while. There are no errors in the log file, only:

        DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000]

    I ran DBCC CHECKDB using sp_MSForEachDB to get more accurate results and had the same error on the same database, but at a different point:

        DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000]

    Also, the error log states that DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue, which only happens on this one database out of the dozens on this server. Any help is appreciated!
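
    To take the job wrapper and the informational chatter out of the picture, it may help to run CHECKDB against that one database directly with full error output (the database name is a placeholder):

        DBCC CHECKDB (N'SuspectDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;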

  • Specific issue with the Data Pump API in Oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do that is a stored procedure on the server side which the client can call, as the client has no Oracle tools to perform the backup. I've searched through the available solutions and found that a stored procedure is the best way, and that the Oracle Data Pump API is the best tool to use inside a PL/SQL stored procedure. I have two specific questions about the API. The first: is the detach function, which detaches the handle, necessary at the end of the procedure, and what happens if I don't use it? I read the Oracle documentation but didn't get their point; they say it doesn't terminate the job but only indicates that the user is no longer interested in it, yet when I use detach at the end of my procedure the exported .dmp file disappears. The second: to perform a user-triggered (client-side) backup where the modifications are only to the data, I used TABLE mode for the export operation. But the version parameter - what should it be? I read the documentation but couldn't determine what I need (LATEST or COMPATIBLE)? Thanks
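
    For reference, a minimal DBMS_DATAPUMP sketch that waits for the job to finish before detaching - detaching while the job is still running, with no other session attached, can get the job (and its dump file) cleaned up, which would explain the disappearing .dmp. Table, directory, and file names are hypothetical:

        DECLARE
          h     NUMBER;
          state VARCHAR2(30);
        BEGIN
          h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                                  job_mode  => 'TABLE',
                                  version   => 'COMPATIBLE');  -- safe default for restores into the same DB
          DBMS_DATAPUMP.ADD_FILE(h, 'backup.dmp', 'DATA_PUMP_DIR');
          DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'IN (''MY_TABLE'')');
          DBMS_DATAPUMP.START_JOB(h);
          DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);  -- block until the job completes
          DBMS_DATAPUMP.DETACH(h);               -- now safe: the dump file is already written
        END;
        /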

  • MySQL: table organisation for very large sets with high insert frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema. Before I start, here is an extremely simplified picture of my database: http://i43.tinypic.com/2wp5lxz.png. In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        - customer: ~5,000; shouldn't grow fast
        - data: 5 million per customer; could double or triple for big customers
        - tag: ~1,000; fairly fixed in size
        - data_tag: easily hundreds of millions per customer; each piece of data can be tagged a lot

    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires constant index refreshing. A lot of my queries are SELECT COUNTs of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely will they involve several customers). You can imagine that with this kind of data volume I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:

        - to stick with this model and manage crazy index optimization? (which potentially means billions of rows in the data_tag table)
        - or to change the schema and use one data table and one data_tag table per customer? (which means having 5,000 tables in my database)

    I'm running all of this on a replicated MySQL 5.0 dedicated server (quad-core, 8 GB of RAM). I only use InnoDB; I also have another server that runs Sphinx. So, knowing all of this, I can't wait to hear your opinion. Thanks.
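
    For the single-schema option, the COUNT query shape suggests a covering index on data_tag - a hedged sketch that assumes customer_id and the timestamp are denormalized into data_tag to avoid the JOIN entirely (column names are guesses, since the schema image isn't reproduced here):

        ALTER TABLE data_tag
            ADD INDEX idx_customer_tag_date (customer_id, tag_id, created_at);

        -- the common question, answerable from the index alone:
        SELECT COUNT(*)
        FROM   data_tag
        WHERE  customer_id = 42
          AND  tag_id = 7
          AND  created_at BETWEEN '2010-01-01' AND '2010-02-01';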

  • PowerShell script to help archive company email

    - by sec_goat
    I am attempting to use PowerShell to gather emails that pertain to a certain subject, so that this correspondence might be turned over to a legal department. I run the following command:

        Get-Mailbox -Database "Mailbox Database" |
            Export-Mailbox -ContentKeywords "Keywords To Search" -TargetMailbox "sec_goat" `
                           -TargetFolder EmailSearch -StartDate "01/13/2011 12:01:00"

    This has pretty much done what I want, and returned a boatload of emails; however, it has also flooded my inbox with hundreds of blank calendars and contact lists. I realize now I should have used an exclusion on those folders, as well as a test environment (which we don't have).

        1. How can I clean up this script so it does not include all the blank folders, contacts and calendars that DO NOT match the keyword search?
        2. How do I clean up hundreds of blank contact lists and calendars in my mailbox without right-clicking and deleting each one?

    EDIT: I edited the post to change the scope of the question. I think my focus is less on the legal perspective and more on "How can I clean up my mess and make future archives less messy and painful?"
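
    Export-Mailbox does take folder-scoping parameters, so a re-run could skip the empty calendars and contact lists at the source; a hedged sketch (the folder paths may need adjusting for the Exchange version and localization):

        Get-Mailbox -Database "Mailbox Database" |
            Export-Mailbox -ContentKeywords "Keywords To Search" `
                           -TargetMailbox "sec_goat" -TargetFolder EmailSearch `
                           -StartDate "01/13/2011 12:01:00" `
                           -ExcludeFolders "\Calendar","\Contacts"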

  • Dump Trac DB on Windows/XAMPP

    - by Whiteknight
    I have a Trac instance running on a Windows XP machine with XAMPP. I am trying to migrate the Trac instance to a newer Linux-based machine, but I'm having a hard time getting the database to cooperate. I try to dump the db with this command:

        sqlite3 C:\tracroot\db\trac.db ".dump" >> mysqldump.sql

    But the generated file is mostly empty:

        BEGIN TRANSACTION;
        COMMIT;

    So that's not right. For the record, my Trac instance is running now and appears to have full access to all the contents of the DB, but sqlite3 (located in C:\xampp\apache\bin) can't seem to get any information from the file. The DB file itself has the header "SQLite format 3", so that seems to be correct. I need to know one of two things: how to get this dump working, OR an alternate way to migrate the Trac database to the new machine. Update: when I try to open the .db file in sqlite3, I get the error "Error: unsupported file format". What format is it in, and why is it unsupported?
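
    "Unsupported file format" is consistent with the sqlite3.exe under C:\xampp being older than the SQLite library that wrote the file. Two hedged workarounds: use Trac's own backup command, or dump through the same Python sqlite3 module Trac itself uses (paths are from the question; iterdump needs Python 2.6+):

        trac-admin C:\tracroot hotcopy C:\backup\tracroot

        rem or dump to SQL via Python:
        python -c "import sqlite3,sys; c=sqlite3.connect(r'C:\tracroot\db\trac.db'); sys.stdout.writelines(l+'\n' for l in c.iterdump())" > trac-dump.sql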

  • mysqld - master-to-slave replication using rsync, InnoDB log sequence number issues

    - by Luis
    I've read several of the related topics posted here, but I have not been able to avoid this InnoDB error. The steps I took to replicate data from a Slackware master - 5.5.27-log (S) - to a FreeBSD slave - 5.5.21-log (F) - were these:

        1. (S) flush tables with read lock;
        2. (S) in another terminal: show master status;
        3. (S) stop mysqld from the command line in a third terminal;
        4. (F) with both servers stopped, rsync the MySQL datadir from (S), excluding master.info, mysql-bin and relay-* files;
        5. (F) start mysqld (skip-slave)

        121018 12:03:29  InnoDB: Error: page 7 log sequence number 456388912904
        InnoDB: is in the future! Current system log sequence number 453905468629.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: for more information.

    This kind of error happens for a lot of tables. I know I could use a dump, but the database is large (ca. 70 GB) and the systems are slow (old), so I would like to get this replication working via file transfer. What should I try to solve this issue?
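
    The error text names the likely cause: the tablespace was copied but the matching InnoDB log files were not. A hedged re-run of the copy that keeps ibdata* and ib_logfile* together with the data directory (paths are typical defaults for each OS):

        # on (S), with mysqld stopped on BOTH machines:
        rsync -av /var/lib/mysql/ freebsd-slave:/var/db/mysql/ \
            --exclude=master.info --exclude='mysql-bin.*' --exclude='relay-*'
        # do NOT exclude ibdata* or ib_logfile* -- they must travel with the tablespaces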

  • Using a virtual machine as a MySQL server

    - by ffmm
    I'm developing a Java program and I need a database. At the moment I'm using MAMP, and it's pretty easy, but I would like to use a virtual machine (Ubuntu Server) instead and connect my Java program to this virtual machine using VirtualBox. The situation:

        - I installed VirtualBox on my Mac and installed an Ubuntu Server machine
        - I set "bridged adapter" in the network settings of VirtualBox
        - I installed MySQL on the Ubuntu Server and created a simple database (everything works fine from Ubuntu itself)
        - Running ifconfig on Ubuntu I get the IP 192.168.1.217

    So in the Java program I wrote this function:

        public static Connection connect(String host, int port, String dbName, String user, String passwd) {
            Connection dbConnection = null;
            try {
                Class.forName("com.mysql.jdbc.Driver").newInstance();
                String dbString = "jdbc:mysql://" + host + ":" + port + "/" + dbName;
                dbConnection = DriverManager.getConnection(dbString, user, passwd);
            } catch (Exception e) {
                System.err.println("Failed to connect with the DB");
                e.printStackTrace();
            }
            return dbConnection;
        }

    and in main() I use:

        Connection con = connect(1, "192.168.1.217", 3306, "Ciao", "root", "cocacola");

    3306 was a default value; I don't know if it is correct - it works on MAMP. How can I find the correct port to use with VirtualBox? When I run the program I end up in the catch block. What's wrong? PS: do I have to install Apache or something else?
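
    Two things stand out. First, the call does not match the method's signature (host first, then port), so as pasted it shouldn't even compile; a corrected call:

        Connection con = connect("192.168.1.217", 3306, "Ciao", "root", "cocacola");

    Second, with a bridged adapter the guest has its own LAN address, so MySQL's default port 3306 is used directly and no VirtualBox port forwarding is involved - but the server must accept remote connections. A hedged checklist on the Ubuntu guest (default MySQL 5.x paths assumed): comment out "bind-address = 127.0.0.1" in /etc/mysql/my.cnf, restart MySQL, and grant remote access:

        GRANT ALL ON Ciao.* TO 'root'@'%' IDENTIFIED BY 'cocacola';
        FLUSH PRIVILEGES;

    No Apache is needed just to reach MySQL from Java.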

  • cause of a MySQL crash

    - by user1322092
    A cron job automatically restarted my MySQL database. What caused the crash, and can you suggest how to resolve or monitor this? I would REALLY appreciate your input.

        120715 14:38:58  mysqld started
        120715 14:38:58  InnoDB: Started; log sequence number 0 411137570
        120715 14:38:58 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
        120715 15:14:21 [Note] /usr/libexec/mysqld: Normal shutdown
        120715 15:14:23  InnoDB: Starting shutdown...
        120715 15:14:25  InnoDB: Shutdown completed; log sequence number 0 411166467
        120715 15:14:25 [Note] /usr/libexec/mysqld: Shutdown complete
        120715 08:14:25  mysqld ended
        120715 08:14:26  mysqld started
        120715  8:14:26  InnoDB: Started; log sequence number 0 411166467
        120715  8:14:26 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
        121212 09:15:32  mysqld started
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121212  9:15:58  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121212  9:17:28  InnoDB: Started; log sequence number 0 554145193
        121212  9:17:57 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
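
    "Database was not shut down normally" with no shutdown messages before it usually means mysqld was killed from outside - power loss, a kill -9, or the kernel's OOM killer. A hedged check for the OOM case (log locations vary by distribution):

        dmesg | grep -i 'out of memory'
        grep -i 'oom\|killed process' /var/log/messages /var/log/syslog 2>/dev/null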

  • How to chain GRUB2 for Ubuntu 10.04 from Truecrypt & its bootloader (multi boot alongside Windows XP partition)?

    - by Rob
    I want TrueCrypt to ask for the Windows XP password as usual, but with the standard [ESC] option; on selecting that (i.e. via the Escape key), I want it to find GRUB for the (unencrypted) Ubuntu install. I installed Windows XP on the 120 GB hard drive of a Toshiba NB100 netbook, then partitioned to make room for Ubuntu 10.04 and installed that after the Windows XP install. When I encrypt Windows XP, TrueCrypt will overwrite the GRUB entry in the master boot record (MBR), I believe(?), and I won't be able to choose between XP and Ubuntu anymore. So I need to restore it. I've searched fairly extensively for answers on the Ubuntu forums and elsewhere, but have not yet found a complete answer that covers all eventualities, scenarios and error messages - or otherwise they talk about legacy GRUB and not GRUB2. Ubuntu 10.04 uses GRUB2. My setup - partitions:

        - Windows XP, NTFS (to be encrypted with TrueCrypt), 40 GB
        - /boot (ext4, 1 GB)
        - Ubuntu swap, 4 GB
        - Ubuntu / (root) - main filesystem, 20 GB
        - NTFS share, 55 GB

    I know that the TrueCrypt boot loader replaces GRUB at boot, because I've already tried it on another laptop. I want the boot loader screen to look something like the usual:

        TrueCrypt
        Enter password: (or [ESC] to skip)

    where the password is for Windows XP, and pressing [ESC] finds the Ubuntu GRUB to boot from. Thanks in advance for your help. The key part of the problem is how to instruct TrueCrypt what to do when the Escape key is pressed, and how GRUB/Ubuntu can be made visible to the TrueCrypt boot loader so it is found when ESC is pressed. Also known as chainloading.
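
    The usual approach is to leave the TrueCrypt loader in the MBR and put GRUB2 into the boot sector of the /boot partition, which is what the TrueCrypt [ESC] option then chains to; a hedged sketch from the running Ubuntu side (the partition number is hypothetical - check sudo fdisk -l):

        # assuming /dev/sda2 is the 1 GB /boot partition:
        sudo grub-install --force /dev/sda2   # --force: GRUB2 warns about partition-boot-sector installs
        sudo update-grub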

  • Old network login passed to IIS

    - by 300 baud
    Let me start by saying that I am not a server guy - I am a developer. But I develop and manage an ASP.NET application that uses Windows authentication. I've run into the problem I am about to describe before, and I would just like to understand how to remedy it since I am the one who always gets the original support request. A user, let's call her JaneDoe, has just gotten married and her login has been changed to JaneJones. We have an application that uses Windows authentication to store the user's login name to a table and then redirects the user to another non-Windows authenticated site with a GUID which points to the table entry we just made. When the user reaches the second site, we read in the login name from the database using the GUID that was passed. Then, we look up the login name in another database where we track application permissions. The problem is that the user is logging into her workstation as JaneJones, but the Windows authenticated site is still receiving a login name of JaneDoe. Is this a domain controller issue? Is it a workstation issue? What's the best way to resolve this?
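
    When a renamed account keeps showing up under its old name, the stale value is often a cached Kerberos ticket or token on the workstation rather than anything in IIS; a hedged first check from the user's machine (klist ships with Vista/7 - on XP it comes from the support tools):

        rem what identity the current session actually holds:
        whoami /upn
        rem drop cached Kerberos tickets, then log off and back on:
        klist purge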

  • Server down at 23:26 every night

    - by miccet
    We have been having a big problem with our site's stability for the last couple of weeks, and after endless hours of troubleshooting I'm not getting anywhere. So I turn to you, dear community. Setup:

        - 2 x VPS servers: front end (8 cores, 8 GB RAM) and database (5 cores, 3 GB RAM), both running Ubuntu
        - Ruby Enterprise Edition with Passenger 3 and Rails 2.3.11
        - MySQL 5.1.67

    The problem is that each night, at exactly the same time (23:26), the SQL server suddenly shows a process list full of COMMITs with increasing Time values. After 30-40 seconds (sometimes longer) a wave seems to get processed and the site responds for a few seconds before it repeats. During this hiccup the database server load spikes while the front end is relaxing. I have looked at slow queries but am not finding any locks or other unusual queries run at this time. I have looked at iotop at the time of the halt and there is no activity from MySQL. I also tried turning off the query cache and messed around with the MySQL configuration file, without much change. Any ideas?
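
    Since it fires at 23:26 sharp, a hedged first step is to enumerate everything scheduled around that time on both hosts (typical Ubuntu paths):

        grep -r . /etc/crontab /etc/cron.d/ 2>/dev/null     # system-wide jobs
        for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done
        grep CRON /var/log/syslog | grep '23:2'             # what actually ran last night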

  • Is it possible to guide installation of new programs using %ProgramFiles%? [closed]

    - by ??????? ???????????
    The purpose of this is to have the default "Program Files" folders (32- and 64-bit) located under an arbitrary path, possibly on a drive separate from where Windows lives. Initially I thought this might be done with a system environment variable via the dialog under Control Panel - System - Advanced - Environment Variables. Those variables turned out to be set in the registry under the key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion. However, one particular entry is confusing: the ProgramFilesPath entry seems to point at an environment variable that is not defined under the same registry key. I could assume that there is no difference between ProgramFilesDir and ProgramFilesPath and that one of them exists for backwards compatibility, but having a legitimate resource from Microsoft to look at would be better than guessing. After receiving some worrying feedback about having both 32- and 64-bit applications in the same folder, I have decided not to ask about the feasibility of this, to avoid discussion. The real question is whether the desired effect can be attained by "cutting into" the Windows setup process and modifying those registry entries as early as possible. These settings should be system-wide, not only for software installed by a particular user. If this is indeed something that can be done, I wonder if there are any subtle pitfalls. Programs that expect libraries and other resources to be in default locations can probably be dealt with using the same technique Windows employs to re-map the "Documents and Settings" folders and the like (i.e. breaking legacy applications is not a real concern).
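
    For what it's worth, ProgramFilesPath is typically stored as a REG_EXPAND_SZ whose value is %ProgramFiles% itself, which is why it appears to reference an undefined variable when viewed in the same key. A hedged sketch of the kind of edit being contemplated - unsupported by Microsoft, and installers are free to ignore these values (the D: paths are hypothetical):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" ^
            /v ProgramFilesDir /t REG_SZ /d "D:\Program Files" /f
        reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion" ^
            /v ProgramFilesDir /t REG_SZ /d "D:\Program Files (x86)" /f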

  • MySQL has stopped accepting connections from other users than root

    - by John
    Hi there. I'm running a MySQL server along with Apache and Tomcat on a Gentoo box. To administer MySQL I'm using phpMyAdmin. A couple of hours ago I received a call: a user was unable to log in to phpMyAdmin. I logged on to phpMyAdmin as root and reset the password; the user was still not able to log in. I then decided to give it a go myself, and even I wasn't able to log in. I tried creating several user accounts; none of them were able to access MySQL via JDBC, the mysql client, or phpMyAdmin. The only user that seems to work is root. Even stranger, websites that connect to MySQL with a user other than root are still able to log in and retrieve content from the database (mainly WordPress and a Tomcat webapp). I have made sure it's not just caching: I was still able to post SQL queries to the database via these web apps. However, I am unable to log in to phpMyAdmin or the mysql client with such a user, and I am also unable to set up a connection with it for any new web applications. Any help is immensely appreciated.
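
    A hedged way to check whether the grant tables themselves are at fault (run as root; the user/host values are placeholders):

        SELECT user, host FROM mysql.user;       -- are the accounts and host masks what you expect?
        SHOW GRANTS FOR 'someuser'@'localhost';
        SET PASSWORD FOR 'someuser'@'localhost' = PASSWORD('newpass');
        FLUSH PRIVILEGES;   -- needed if the tables were ever edited directly rather than via GRANT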

  • Installing Debian 7.6.0 on Lenovo Y50

    - by Girauder
    I am trying to install Debian on my new laptop, a Lenovo Y50 (64-bit) running Windows 8. I got together with a friend and installed Debian on his computer first with no problems. However, I've tried to install Debian several times, using both the AMD64 KDE and netinst images, and accomplished nothing.

        - First try: installed the KDE version. GRUB would let me choose which operating system I wanted, but when I selected Debian it would only load a command line.
        - Second try: reinstalled, this time with the netinst version. I only got a black screen where I could type, but nothing else.
        - Third try: the netinst again. This time, after making the partitions, I got a message saying that no EFI partition was found. I ignored the message, and this time it wouldn't even load GRUB - only a command-line interface with "grub rescue" or something.

    Not once did I get an error during the installation. What am I doing wrong? I assume the problem is that I need to make an EFI partition or something like that. But then why didn't the first installations ask me for that? And if that is indeed the problem, how can I solve it? Update: the installation failed again, as predicted. You can find the Disk Management picture here: http://postimg.org/image/433cpfkjz/. Please, somebody help me - I keep getting the grub rescue prompt. Secure Boot is disabled and Legacy Support is set first.
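
    If the installer boots in UEFI mode, an EFI System Partition really is required, and a machine that shipped with Windows 8 normally already has one; a hedged first check from the installer's shell before creating anything (device names are assumptions):

        ls /sys/firmware/efi && echo "installer is running in UEFI mode"
        parted -l        # look for an existing ~100-500 MB fat32 "EFI system partition"

    If one exists, point the Debian partitioner at it as "EFI boot partition" instead of creating a new one, and make sure the installer and Windows boot in the same firmware mode ("Legacy Support first" makes the installer boot in BIOS mode, which matches the no-EFI-partition symptom).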
