Search Results

Search found 2819 results on 113 pages for 'healthcare transaction ba'.

  • Update RDS db via mysqlbinlog: "you need (at least one of) the SUPER privilege(s)"

    - by timoxley
    We are moving a production site to EC2/RDS, following these instructions: http://geehwan.posterous.com/moving-a-production-mysql-database-to-amazon. I set up row-based binary logging on the production server, ran:

        mysqldump --single-transaction --master-data=2 -C -q -u root -p > backup.sql

    and then imported the dump into the RDS instance. No dramas. Because of the size of the database and the minimal-downtime requirement, I now need to bring the RDS database up to the latest data by replaying the binlogs, but it won't let me. Running:

        mysqlbinlog mysql-bin.000004 --start-position=360812488 | mysql -uroot -p -h

    fails with:

        ERROR 1227 (42000) at line 6: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

    My guess, based on what is on line 6 of the binlog output, is that it is one of the statements in the SQL dump that writes to the binlog, and because RDS doesn't grant SUPER it can't run those statements, or something; I don't really know. Please help.

    Read the article

  • Reaver keeps repeating the same PIN

    - by Umair Ayub
    I have been trying to hack a WPA2 Wi-Fi network and so far I am stuck. The problem is that Reaver keeps trying the same PIN over and over again. Here is the last reaver command I entered:

        reaver -i mon0 -b 2C:AB:25:51:F1:CF -vv -c 1 -S -L -f

    It does this (only one PIN, again and again):

        [+] Switching mon0 to channel 1
        [+] Waiting for beacon from 2C:AB:25:51:F1:CF
        [+] Associated with 2C:AB:25:51:F1:CF (ESSID: PTCL-BB)
        [+] Trying pin 12345670
        [+] Sending EAPOL START request
        [+] Received identity request
        [+] Sending identity response
        [!] WARNING: Receive timeout occurred
        [+] Sending WSC NACK
        [!] WPS transaction failed (code: 0x02), re-trying last pin
        [+] Trying pin 12345670
        [+] Sending EAPOL START request
        [+] Received identity request
        [+] Sending identity response
        [+] Received identity request
        [+] Sending identity response
        ^C
        [+] Nothing done, nothing to save.

    Read the article

  • SQL Server 2005, Sudden increase of connections - SharePoint 2007

    - by CrazyNick
    We observed a sudden increase in SQL connections during a specific hour; the instance is the back end of a SharePoint 2007 farm.

    From the SharePoint 2007 perspective:
    1. Incremental crawling is scheduled at that time, and a few of the timer jobs (normal timer jobs) are scheduled to run every minute / every 10 minutes.
    2. The number of user requests is low.

    From the SQL Server 2005 perspective:
    1. The transaction log backup is scheduled at that time.
    2. No other scheduled jobs are running at that time.

    How can I narrow down the issue? What could be causing the sudden increase in SQL connections?
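    A query along these lines may help show which hosts and client programs own the extra connections during that hour (a sketch; sys.dm_exec_sessions exists on SQL Server 2005 and later, and the grouping columns are only a suggestion):

        SELECT s.host_name,
               s.program_name,
               s.login_name,
               COUNT(*) AS connection_count
        FROM sys.dm_exec_sessions AS s
        WHERE s.is_user_process = 1
        GROUP BY s.host_name, s.program_name, s.login_name
        ORDER BY connection_count DESC;

    Comparing the output from a quiet hour with the spike hour should show whether the extra connections come from the crawler/timer-service machines (host_name) or from somewhere else entirely.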

    Read the article

  • Is there a way to add AD LDS users to an AD Domain Group or allow them domain security rights?

    - by Tom
    I have a web application in which our outside customers need to run transactions (stored procedures on SQL Server) on our domain. We have looked into AD LDS to keep these users separate from our domain. The problem we are having is granting the LDS users the AD security rights to access these stored procedures. For administration purposes we would like to use an AD group for each transaction (stored procedure) that has execute access. Is there a way to add LDS users to this AD group, or to grant them the security rights to do this? We have set up LDS and can authenticate an AD user through it to run these transactions. LDS is running on Server 2008 R2; AD is also Server 2008 R2. Thanks.

    Read the article

  • Do MSDTC and disaster recovery go together?

    - by DevDelivery
    Our application writes to multiple SQL Server databases within a distributed transaction. The ops guys are saying that this messes up their disaster recovery plan: while the transactions on the live tables commit at the same time, log shipping for the separate databases happens at slightly different times, so in a disaster recovery situation there will be a few partial transactions. Is there a method for maintaining separate but synchronized databases in DR, or do we have to redesign around relatively independent databases (or a single database)?

    Read the article

  • SQL Server replication - Log Reader Agent Read Latency Issue, Please help

    - by envykok
    I am facing a transactional replication delay on the Log Reader Agent. The log reader output is:

        ********* STATISTICS SINCE AGENT STARTED **************
        02-28-2011 20:12:08
        Execution time (ms): 304141
        Work time (ms): 304016
        Distribute Repl Cmds Time(ms): 303764
        Fetch time(ms): 300813
        Repldone time(ms): 1826
        Write time(ms): 5319
        Num Trans: 15500    Num Trans/Sec: 50.984159
        Num Cmds: 191639    Num Cmds/Sec: 630.358271

    This looks like Log Reader reader-thread latency. I also ran sp_replcounters and saw more than 20,000 seconds of replication latency, and it keeps increasing. I used SQL Profiler to monitor sp_replcmds and found its execution time was 11 to 15 seconds. Is there any way to optimize the Log Reader so it reads from the transaction log faster? Other information: SQL Server 2008 (SP2) Standard, 64-bit.
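    A couple of queries that may help pin down where the time goes (a sketch; sp_replcounters runs on the publisher, and I am assuming the distribution database is named distribution):

        -- Publisher: replicated transaction backlog and latency per published database
        EXEC sp_replcounters;

        -- Distributor: recent Log Reader Agent history, including delivery latency
        SELECT TOP (20) time, delivery_latency, delivered_transactions,
               delivered_commands, comments
        FROM distribution.dbo.MSlogreader_history
        ORDER BY time DESC;

    If the reader thread really is the bottleneck, the usual knobs are the Log Reader Agent profile parameters (-ReadBatchSize, -ReadBatchThreshold, -PollingInterval), checking for a very large number of virtual log files in the published database's log, and looking for long-running open transactions that keep the reader rescanning the same portion of the log.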

    Read the article

  • PHP vs Batch file for mysql cronjob?

    - by mysqllearner
    My server details: Windows Server 2003, IIS 6, Plesk 8.x installed (currently using Plesk to set the cron job). I need your advice. I have two methods:

    Method 1: Using PHP + mysqldump, create gzipped database backup files and then send an email with the attachment (each database is around 25 MB).
    Method 2: Using a batch file + mysqldump, create gzipped database backup files and then send an email with the attachment (same, each database is around 25 MB).

    My questions:
    1. What is the difference between using a PHP file and a batch file for the cron job?
    2. Which method is better in terms of backup speed, sending the email, and (maybe) safety (e.g., less chance of file corruption)?
    3. If I set the cron job to run hourly, will it affect my web performance? Say my website has 100+ users online, each making transactions against MySQL: if I run the backup at peak hour, will it degrade performance (slower page loads, more errors, etc.)?

    P.S. If you need my PHP and batch file code, please ask me to post it here. I didn't post it now because it is very simple, standard code.

    Read the article

  • Run a MongoDB configuration server without 3GB of journal files

    - by Thilo
    For a production sharded MongoDB installation we need 3 configuration servers. According to the documentation, "the config server mongod process is fairly lightweight and can be run on machines performing other work". However, in the default configuration they all have journalling enabled, and with preallocation this takes up 3 GB of disk space. I assume the actual data and transaction volume of a config server is quite small, so this seems like a lot. Is there a way to (safely!) run these config servers with much less disk use for the journal? Do I need journalling at all on config servers? Can I set the journal size to be smaller?

    Read the article

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF), T: (transaction log) and, of course, shared transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, adding multiple SQL Server NDF files on R: within a filegroup won't yield any further improvement. Adding another RAID array S: and putting an NDF file on that, of course, would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that even for smaller MDFs sitting on one RAID array, SQL Server performs growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sit on R:, would distribute the locking and growth operations and allow more throughput. Or does the cost of reassembling data spread across multiple files outweigh the benefit of reduced locking? I'm also aware that the behaviour and benefits may differ for tables, indexes and the log. Is there a good site that explains the benefits of multiple files when RAID is already in place?
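    If you want to measure the idea rather than argue it, adding a second data file to the same filegroup on R: is cheap to test. A minimal sketch, assuming a hypothetical database named SalesDb; the file name, size and growth values are placeholders:

        -- Add a second data file to the PRIMARY filegroup on the same R: array
        ALTER DATABASE SalesDb
        ADD FILE
        (
            NAME = SalesDb_data2,
            FILENAME = 'R:\Data\SalesDb_data2.ndf',
            SIZE = 10GB,
            FILEGROWTH = 1GB
        )
        TO FILEGROUP [PRIMARY];

    The allocation-contention argument mostly matters for tempdb (PFS/GAM/SGAM pages); for user databases on a single array, pre-sizing the files so autogrowth rarely fires tends to buy more than extra NDFs on the same spindles.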

    Read the article

  • Automatically keeping two Excel data tables in sync (w/out VBA)

    - by Neil
    I'm putting together a workbook for tracking a stock portfolio. The primary sheet contains a table with the list of transactions. From this I would like to create an overview table on another sheet with only one row per unique stock symbol, including things like cost basis, returns, etc. The problem is that nothing I've tried updates the overview table correctly when rows are added to the transaction table. The closest I've got is something like the following: http://www.get-digital-help.com/2009/04/14/create-a-unique-alphabetically-sorted-list-extracted-from-a-column/ However, this requires applying that formula to every cell in the primary column of the overview sheet, and even then the range of the table isn't extended down to include new rows as they become valid. Essentially I'm looking for a way to auto-add rows to a table and copy the formula down when a different table changes, without using VBA. Trivial example data:

        Sheet1
        Symbol  Type  Shares  Price
        F       Buy   100     12
        MSFT    Buy   100     25
        MSFT    Sell  50      28
        F       Buy   100     16

        Sheet2
        Symbol  Quantity
        F       200
        MSFT    50

    Read the article

  • Could not retrieve backup settings for primary ID in Log shipping

    - by user1723139
    I am doing log shipping between two Amazon EC2 instances running Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition. Both instances are in the same domain and I can access the shared folders between them. The SQL Server service account and agent service account both run under a domain account. When I activate log shipping (with standby-mode restore on the secondary server), the initial backup is restored on the secondary. After that the backup operation fails and I get the following error message:

        *** Error: Could not retrieve backup settings for primary ID 'xxxxxx-xxxx-xxxx-xxxx-4d772cd7337e'. (Microsoft.SqlServer.Management.LogShipping) ***
        *** Error: Failed to connect to server IP-0A7653F2. (Microsoft.SqlServer.ConnectionInfo) ***
        *** Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (.Net SqlClient Data Provider) ***
        ----- END OF TRANSACTION LOG BACKUP -----

    Any ideas?
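    The second error suggests the backup job is trying to reach the primary under a stale machine name. Two checks worth running on the primary (a sketch; these are the standard log shipping metadata tables in msdb):

        -- Does the name SQL Server recorded for itself match the actual machine name?
        SELECT @@SERVERNAME AS sql_server_name,
               SERVERPROPERTY('MachineName') AS machine_name;

        -- What the log shipping configuration recorded for the primary
        SELECT primary_id, primary_database, backup_directory, backup_share, monitor_server
        FROM msdb.dbo.log_shipping_primary_databases;

    If @@SERVERNAME still returns the pre-rename EC2 host name (IP-0A7653F2 in the error), the usual fix is sp_dropserver for the old name, sp_addserver with the new name and 'local', a SQL Server service restart, and then re-running the log shipping configuration.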

    Read the article

  • Slow IE8 Start-up due to LDAP DNS queries

    - by MikeJ-UK
    Recently (in the last few days), my installation of IE8 has been taking 15 to 20 seconds to load my home page. Specifically, the sequence of events (as reported by Wireshark) is:

    1. The browser issues a DNS A query to resolve the home page server's IP address.
    2. The browser then spends the next 15-20 seconds broadcasting DNS SRV _LDAP._TCP queries (roughly on a 2-second tick), to which it receives no answer (we have no LDAP servers).
    3. The browser re-issues the DNS A query and resolves the server's IP address again.
    4. Finally, the browser issues an HTTP GET for the home page.

    Does anyone know why this is happening? Possibly related to this question.

    EDIT: @Massimo, the LDAP query is:

        Domain Name System (query)
        Transaction ID: 0x11c5
        Flags: 0x0100 (Standard query)
        Questions: 1
        Answer RRs: 0
        Authority RRs: 0
        Additional RRs: 0
        Queries
          _LDAP._TCP: type SRV, class IN
            Name: _LDAP._TCP
            Type: SRV (Service location)
            Class: IN (0x0001)

    Read the article

  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump for the backup:

        mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
        mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server-side setting, and on the client side I've set it by running:

        mysql -u -p --max_allowed_packet=1G

    I have verified that the client and server sides have the same value. This checks the client-side value, per this forum posting http://forums.mysql.com/read.php?35,75794,261640 :

        mysql> SELECT @@MAX_ALLOWED_PACKET;
        +----------------------+
        | @@MAX_ALLOWED_PACKET |
        +----------------------+
        |           1073741824 |
        +----------------------+
        1 row in set (0.00 sec)

    And this checks the server value:

        mysql> SHOW VARIABLES;
        | max_allowed_packet | 1073741824 |

    I have run out of ideas; I've tried searching Experts Exchange and Googling for solutions, but so far nothing has worked. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html Please advise, thank you.
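    One thing worth checking: --max_allowed_packet passed to an interactive mysql session only affects that session, and mysqldump is a separate client that reads its own [mysqldump] option group, so it may still be running with a smaller value. The server-side variable can also be raised without a restart; a sketch (1073741824 is just the 1G value from the question):

        -- Raise the server-side limit dynamically (applies to new connections)
        SET GLOBAL max_allowed_packet = 1073741824;

        -- Confirm what the server is actually using
        SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';

    For the dump itself, the equivalent client-side setting is either max_allowed_packet under a [mysqldump] section in my.cnf, or the --max_allowed_packet=1G option on the mysqldump command line that ZRM invokes.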

    Read the article

  • Concurrent backups in SQL Server?

    - by Mikey Cee
    We currently have our backups managed by a third-party company. There are a bunch of agent jobs that take full backups (4 times a day) and transaction log backups (4 times an hour). We now want to manage our backups in house, but we don't want to disable the third party's jobs until we are sure that we have everything configured correctly internally. So I am proposing a short period (say, a couple of days) where backups are taken by both the old and the new system. I am wondering what the ramifications are of having these two different systems both manage backups, and what the potential pitfalls are of having backups taken simultaneously. Is this even supported? If so, and bearing in mind that the system can cope with one backup without any noticeable performance degradation, is it fairly logical to assume that it should be able to cope with two simultaneous backups? Currently the load on the server is fairly light and it rarely struggles. Any advice is appreciated.
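    Running both sets of jobs is supported, and overlapping full backups mostly just cost extra I/O, but two sets of regular log backups interleave into a single restore chain: each log backup only covers what happened since the previous one, whichever system took it. A common way to keep the trial period harmless is to take the new system's backups as copy-only; a sketch, assuming a hypothetical database named YourDb and a local backup path:

        -- Full backup that does not reset the differential base the other system relies on
        BACKUP DATABASE YourDb
        TO DISK = N'D:\Backups\YourDb_full_copyonly.bak'
        WITH COPY_ONLY, CHECKSUM, STATS = 10;

        -- Log backup that does not truncate the log or break the other system's log chain
        BACKUP LOG YourDb
        TO DISK = N'D:\Backups\YourDb_log_copyonly.trn'
        WITH COPY_ONLY, CHECKSUM, STATS = 10;

    Once you cut over, drop COPY_ONLY and disable the third-party jobs so there is exactly one log chain again.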

    Read the article

  • MSSQL: Ultimate configuration for rebuilding indexes + statistics

    - by Niels Bosma
    My database is getting slower as it grows, even though I have a bunch of indexes set up. Yesterday I figured out that I need to set up a maintenance plan to rebuild the indexes, etc. So my question is: what's the ultimate configuration for this? Do I need all of the "Rebuild Index Task", "Reorganize Index Task" and "Update Statistics Task"? Is there anything else I need to set up, such as shrinking the database? (Today, the only maintenance plan I have is the backup.) Does it matter in what order I run them? Are there any configuration options I should be aware of? I've read about problems with the log growing wild; how do I fix that? My transaction log is quite small and is usually a problem for me.
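    For what it's worth, the usual pattern is: reorganize lightly fragmented indexes, rebuild heavily fragmented ones, update statistics afterwards, and skip database shrinks as routine maintenance. A minimal sketch of the checks and commands involved, using a hypothetical table dbo.Orders and the commonly quoted 5%/30% thresholds:

        -- Check fragmentation for one table's indexes
        SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

        -- Roughly 5-30% fragmented: reorganize (lighter-weight, always online)
        ALTER INDEX ALL ON dbo.Orders REORGANIZE;

        -- Over roughly 30% fragmented: rebuild (also refreshes statistics on those indexes)
        ALTER INDEX ALL ON dbo.Orders REBUILD;

        -- Refresh statistics that were not rebuilt above
        UPDATE STATISTICS dbo.Orders;

    Rebuilds are fully logged under the full recovery model, which is where the "log growing wild" stories usually come from; take log backups around the maintenance window (or only rebuild what is actually fragmented) rather than shrinking afterwards.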

    Read the article

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install, but (of course) now I have to support the databases. Both databases are in the simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking I should change the recovery model to full and start doing transaction log backups. I wasn't planning on backing up the subscriber database, only the publisher. Is this the right plan? Do I need to switch both the subscriber and the publisher to full, or can I leave the subscriber in simple and have the publisher in full? When I change the recovery model on one (or both), do the databases need to be offline? Thanks.
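    Changing the recovery model is an online operation, but the log chain only starts once a full backup is taken after the switch; until then the database still behaves as if it were in simple mode. A sketch of the sequence on the publisher, assuming a hypothetical database named PubDb and a local backup path:

        -- Switch to the full recovery model (no downtime required)
        ALTER DATABASE PubDb SET RECOVERY FULL;

        -- Establish the log chain with a full backup
        BACKUP DATABASE PubDb
        TO DISK = N'D:\Backups\PubDb_full.bak'
        WITH CHECKSUM, STATS = 10;

        -- From here on, schedule regular log backups
        BACKUP LOG PubDb
        TO DISK = N'D:\Backups\PubDb_log.trn'
        WITH CHECKSUM, STATS = 10;

    Replication itself does not care which recovery model either side uses, so leaving the subscriber in simple is reasonable as long as you are comfortable re-initialising it from a snapshot after a disaster.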

    Read the article

  • How to Shrink/Reset File stderr1 on a SAP System?

    - by Techboy
    I have a file called stderr1 in the work directory on several of the SAP servers in my production cluster. It has grown to around 19 GB, filling the hard disk on each server. I have deleted all trace files and WP files from within transaction SM50, but that hasn't deleted it (or renamed it to .old). If I try to rename or delete it manually, I'm told I can't because the file is in use. Can you please tell me how I can delete or shrink the stderr1 file?

    Read the article

  • MSSQL Backup Question

    - by MJ
    I'm currently taking over, until we hire a replacement, for someone who was in charge of backing up over 250 servers on different platforms. The main question I have is: if we use backup software such as Symantec Backup Exec, does it perform the correct backup for SQL Server? I was listening to the Stack Overflow podcast, and I heard them say that you cannot just back up the SQL data files; you also need the transaction log. So if we just back up the whole machine, would we be able to recover it correctly, since we would be backing up both the data files and the log? Thanks!
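    Copying the .mdf/.ldf files of a running instance is not a valid backup on its own; a SQL-aware agent drives SQL Server's own backup engine (through the VDI/VSS interfaces), which is what Backup Exec's SQL agent option is for. The native equivalent, as a sketch with a hypothetical database named AppDb and a local path:

        -- Full database backup
        BACKUP DATABASE AppDb
        TO DISK = N'D:\Backups\AppDb_full.bak'
        WITH CHECKSUM, STATS = 10;

        -- Transaction log backup, which also lets the log be truncated and enables
        -- point-in-time restores
        BACKUP LOG AppDb
        TO DISK = N'D:\Backups\AppDb_log.trn'
        WITH CHECKSUM, STATS = 10;

    A whole-machine image taken without a SQL-aware agent is essentially a copy of open files and may not restore to a consistent database, so it is worth checking which option the current Backup Exec jobs actually use.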

    Read the article

  • WS-AT Issue between WPS 6.2 and WAS 7.0

    - by AK
    I have a BPEL process running on WPS 6.2 that calls a web service developed in RAD 7.5 and deployed on the RAD test environment. I have set up a WS-Transaction policy on both the client and the server. I get an error on WAS 7.0 saying:

        Must Understand check failed for headers: {http://schemas.xmlsoap.org/ws/2004/10/wscoor}CoordinationContext

    When I generate the same web service in IBM WID 6.2 and deploy the EAR on WAS 7.0, it works perfectly. Any thoughts? Is there a SOAP runtime mismatch? Help appreciated. -AK

    Read the article

  • Database checksum features - redundant? useful?

    - by Eloff
    Just about every mainstream database has a feature to calculate checksums per page, per sector, or per record. Now, for a database that does full recovery after any crash, like PostgreSQL, is a checksum even useful? There will be no data loss as long as the xlog is OK, no matter what kind of corruption happened to the data itself: as the redo log is replayed, every committed transaction will be restored. So checksums seem useless on restore. Doesn't the filesystem or the disk keep checksums anyway to detect corruption? Unless the checksum is per record, all it does is tell you there is corruption, which the OS should be yelling at you about the minute you try to read the data, so it seems useless in operation too. I can't imagine how a checksum can be helpful in any sane database, but since they all use them, I'd say that's just a failure of imagination on my part. So how is it useful?
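    For context, this is roughly how SQL Server exposes the feature (a sketch; the database name is a placeholder). The page checksum is computed when a page is written and verified when it is read back, so it catches corruption introduced by the I/O path between write and read, which crash recovery (which only replays the log) and most filesystems will not detect for you:

        -- Enable page checksums (the default for databases created on SQL Server 2005 and later)
        ALTER DATABASE AppDb SET PAGE_VERIFY CHECKSUM;

        -- Verify every allocated page on demand
        DBCC CHECKDB (AppDb) WITH NO_INFOMSGS;

        -- Verify page checksums while taking a backup, so corruption is noticed
        -- before the last good backup has expired
        BACKUP DATABASE AppDb
        TO DISK = N'D:\Backups\AppDb_full.bak'
        WITH CHECKSUM, STATS = 10;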

    Read the article

  • SQL 2005 Log Shipping - Was working, now isn't!

    - by Jim
    I had log shipping working fine between two SQL 2005 servers. I suspect that a job was added to the source server which backed up the transaction log to disk (nothing to do with the existing log shipping job). As I understand it, if you do this then log shipping will stop working. Sure enough, it no longer works. I've deleted the job that had been created, but log shipping still does not work. I've rebooted both servers and, again, log shipping does not work. I'm at a loss now... all I get is the following error:

        The log shipping secondary database XXXXXXXXXX has restore threshold of 45 minutes and is out of sync. No restore was performed for 5882 minutes. Restored latency is 15 minutes. Check agent log and logshipping monitor information.

    Any help appreciated! Thanks in advance.
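    The suspicion is easy to confirm from the backup history: any log backup taken outside the log shipping job consumes part of the log chain, so the shipped .trn files on the secondary no longer form an unbroken sequence. A query along these lines, run on the primary (a sketch; substitute your database name for 'YourDb'), lists every log backup, who took it and where it went:

        SELECT bs.backup_start_date,
               bs.user_name,
               bs.first_lsn,
               bs.last_lsn,
               bmf.physical_device_name
        FROM msdb.dbo.backupset AS bs
        JOIN msdb.dbo.backupmediafamily AS bmf
          ON bmf.media_set_id = bs.media_set_id
        WHERE bs.database_name = 'YourDb'
          AND bs.type = 'L'
        ORDER BY bs.backup_start_date DESC;

    If a stray log backup shows up in the list, restoring that file and everything after it, in LSN order and WITH NORECOVERY (or STANDBY), on the secondary usually gets the chain going again without a full re-initialisation.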

    Read the article

  • Excel - Linking multiple source spreadsheets with variable numbers of rows to a destination spreadsheet

    - by Emilio
    I have multiple source spreadsheets, each with a variable number of rows. An example might be one spreadsheet per bank account, with one row for each transaction, with a date and amount. One spreadsheet might have 30 rows, another 50, and so on. I want to create another spreadsheet which links to the various source spreadsheets and lists an aggregate of all transactions from all sources. So with 3 source sheets of 30, 50 and 20 rows respectively, the destination sheet would have 100 rows. The number of rows (transactions) in the source sheets can grow or shrink over time. I'd like the destination sheet to show one contiguous list of transactions without gaps (blank rows). How can I do this?

    Read the article

  • Best solution for High Availability and SSRS on SQL Server 2008 R2?

    - by Chandra
    I have two physical servers with SQL Server 2008 R2: SQL Server 1 (active) and SQL Server 2 (passive). The web application is developed on the .NET 4.0 Framework. I want to know the best solution for high availability that also provides SSRS for reporting. Planned solution: mirroring for failover, and transactional replication for SSRS, since the mirrored database can only be used in failover scenarios. SSRS will run on the passive server to reduce the load on the active server. Let me know if this solution is correct, and please suggest alternative approaches.

    Read the article

  • Correct password for SSH key rejected when SSH'd into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a password, the password is accepted. When I SSH into the same box and run git operations on the same repos, the password is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The session looks like this when I am SSH'd into my box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't pinpoint the change that caused it.

    Read the article
