Search Results

Search found 47408 results on 1897 pages for 'database machine'.


  • SQL SERVER FIX : ERROR : 4214 BACKUP LOG cannot be performed because there is no current database backup

    I recently got the following email from one of my readers: Hi Pinal, Even though my database is in full recovery mode, when I try to take a log backup I am getting the following error: BACKUP LOG cannot be performed because there is no current database backup. (Microsoft.SqlServer.Smo) How do I fix it? Thanks, [name and email removed as requested] Solution / Fix: This error can [...]
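
    The snippet cuts off before the fix, but error 4214 means the log chain has no base: a full database backup must exist before the first log backup can run. A minimal sketch, assuming a hypothetical database named MyDb and placeholder backup paths:

        -- take a full backup first to establish the base of the log chain
        BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak';
        -- the log backup should now succeed
        BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb.trn';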

    Read the article

  • OK, I have my database ready, now what's missing?

    - by fatherjack
    During the life of any database there will be times when a developer makes a change that breaks the functionality of an object somewhere else in the database. SQL Server does a good job in some places of making this impossible, or at least really difficult, but in other places there isn't even a murmur as you execute a script that will bring your system processes out in a nasty plague of error messages. Where it works: if you try to create a view based on a table or column that doesn't...(read more)
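
    One place where SQL Server does make breakage impossible is a schema-bound view; a minimal sketch, with hypothetical table and view names:

        -- WITH SCHEMABINDING ties the view to the underlying table definition
        CREATE VIEW dbo.v_OrderTotals WITH SCHEMABINDING AS
            SELECT OrderId, Total FROM dbo.Orders;
        -- DROP TABLE dbo.Orders (or dropping the Total column) now fails
        -- with an error instead of silently breaking the view.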

    Read the article

  • AI in the service of Google's datacenters: the company builds a machine capable of constructing predictive models to improve its PUE

    Joe Kava, VP of the division responsible for Google's data centers, explained that Mountain View has begun using an artificial neural network to optimize the data-processing operations passing through its servers, and also to further reduce energy consumption. These neural networks are essentially...

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 1 - Create a Database using Model-First Development

    This is the first in a series of articles that will develop an ASP.NET application that uses the Entity Framework 4. This article will demonstrate how to use Visual Studio 2010 and the Entity Framework 4 to create a database in a step-by-step approach. The database created in this article will then be used in future articles to demonstrate how to add, update, delete, and select records using the Entity Framework.

    Read the article

  • How much effort is SQL Server 2008 Administration?

    - by Adrian Grigore
    Hi, I am looking for a suitable hosting environment for an ASP.NET MVC application. One of the options I have is renting a Hyper-V server and installing my license of SQL Server 2008 on it. I'm a bit wary of shared hosting, since the one I have tried so far did not seem to have very consistent performance. One potential problem is that I do not know much about SQL Server administration, so I am not sure if this is a good option. I've been running a failover cluster of two Linux dedicated servers for over 5 years now and MySQL never gave me any trouble, but that was Linux, and it might be different with a Windows system. Is running a reasonably efficient MS SQL Server 2008 difficult? Does it require any in-depth administration knowledge, or recurring administration effort (such as keeping the server up to date with the latest patches)? Or is it rather an "install and forget" experience similar to MySQL?

    Read the article

  • Initial deployment of ClickOnce product on corporate network

    - by MrEdmundo
    Hi there, I'm a developer looking at introducing ClickOnce deployment for an internal .NET WinForms application that will be distributed via the corporate network. Currently the product rollout and updates are handled by Group Policy; however, I would now like to control updates via ClickOnce deployment. What I would like to know is: how should I initially roll out the package to make sure that all users have got it? Can I use a combination of Group Policy (for the rollout) and then rely on the ClickOnce deployment model for any further updates?

    Read the article

  • Recovering/Rebuilding MySQL .FRM, .MY* files from IBDATA1

    - by Synetech
    I recently had an incident in which several MySQL files were wiped out (mostly from WordPress, but also a few of MySQL’s own files). The IBDATA1 file is unaffected, but several .frm are gone as are a few .myi and .myd files. So now I need to find out if there is a way to rebuild the missing files from IBDATA1. I tried Googling it, assuming that such an issue has come up before, and indeed there were numerous search results (including this question), but all of the ones I looked at were the opposite, about recovering from .frm and .my* files or somehow required these files. Is there a way to rebuild these files? I know I have a relatively recent backup (a .SQL file) if there isn’t, but I’m hoping that these are the kind of files that are rebuilt if missing or outdated.
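
    For context: .frm files hold table definitions and .myi/.myd are MyISAM index/data files, none of which ibdata1 can regenerate on its own, so if rebuilding proves impractical the recent .SQL dump is the fallback. A minimal sketch, assuming a hypothetical dump file named backup.sql and a database named wordpress:

        # recreate the schema and data from the logical backup
        mysql -u root -p wordpress < backup.sql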

    Read the article

  • Can't connect to MySQL server from Java application

    - by RN
    This is on a VPS/CentOS server. The MySQL server is pre-configured. I am running the Java application on Tomcat. My Java web application is not able to connect to the MySQL server. I get an error - "Caused by: java.net.ConnectException: Connection refused". I suspect this to be a configuration problem rather than a coding problem - hence I have posted this on ServerFault. And yes, the same web-app is able to connect to MySQL on a different Linux box. This is the URL that I provided to my Java application (note - it assumes the default port): url = "jdbc:mysql://localhost/pickupgames" My first suspicion was that I am running on a non-default port, so I tried to find the port where the MySQL server is running. I tried every trick mentioned in http://serverfault.com/questions/116100/how-to-check-what-port-mysql-is-running-on but no luck! SHOW GLOBAL VARIABLES LIKE 'PORT'; shows port 0. netstat -tlnp doesn't show mysql at all. /etc/my.cnf has no port entry. telnet localhost 3306 doesn't connect. And in case you are wondering whether the mysql server is running at all: it is, and I know for sure, because I have been able to log in using the mysql command. Also: # ps -ef|grep 'mysql' root 31839 27662 0 00:49 pts/3 00:00:00 grep mysql root 32452 1 0 Apr02 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking mysql 32504 32452 0 Apr02 ? 00:00:06 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking Please note the --skip-networking parameter. Does this have something to do with the issue? Any explanation why I can't connect to the MySQL server on port 3306 by telnet? Or why it doesn't show up under netstat? Any suggestion on what I should try next?
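
    The ps output already contains the answer: mysqld was started with --skip-networking, which disables TCP entirely, so the port 0 value, the empty netstat, and the refused telnet are all expected (and --skip-grant-tables explains why logins work without normal authentication). A sketch of the likely fix, assuming the flags come from /etc/my.cnf or whatever script launches mysqld_safe:

        # remove skip-networking (and skip-grant-tables) from /etc/my.cnf
        # or from the mysqld_safe invocation, then restart:
        service mysqld restart
        # TCP should now be listening:
        netstat -tlnp | grep 3306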

    Read the article

  • PostgreSQL automatic backup

    - by Ragnar123
    I have been trying to set up a backup script on a Windows server. I have used pgAgent (scheduling for pgAdmin) to run the backup script. No problems with the backup script itself. However, my jobs are not running like they should. I have set both the schedule and the steps. I am fairly certain that I am running the service under a wrong user, or a user without the required permissions. I run the service like this: "C:\Program Files\pgAdmin III\pgAgent" INSTALL pgAgent -u postgres -p secret hostaddr=127.0.0.1 dbname=pgadmin user=postgres And I get an error telling me that there was an error with the login information, though I know it's correct. When I go under services (Control Panel -- Administration -- Services), I am able to start the service with the local user. Can this be the problem? Where can I see or change the permissions on the postgres user?
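
    Note that the -u flag here names a Windows account, not the postgres database role, and a service login error usually means that account lacks the "Log on as a service" right or the password is wrong. One hedged workaround, assuming the service was installed under the name pgAgent as above, is to point the existing service at LocalSystem with the standard sc tool:

        REM run the pgAgent service as LocalSystem instead of the failing account
        sc config pgAgent obj= "LocalSystem"
        sc start pgAgent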

    Read the article

  • PHP vs batch file for MySQL cronjob?

    - by mysqllearner
    Hi, my server details: OS: Windows Server 2003, IIS 6, Plesk 8.x installed (currently using Plesk to set the cronjob). I need your advice. I have 2 methods: Method 1: Using PHP + mysqldump, create database backup files as gzip, and then send an email with the attachment (each database is around 25 MB). Method 2: Using a batch file + mysqldump, create database backup files as gzip, and then send an email with the attachment (same, each database is around 25 MB). My questions: What's the difference between using a PHP file and a batch file for the cronjob? Which method is better in terms of backup speed and sending email, and (maybe) safety (e.g., lower chance of file corruption)? If I set the cronjob hourly, will it affect my web performance? I mean, let's say my website has 100+ users online now, each making transactions against MySQL; if I perform a backup at my web peak hour, will it decrease performance (loading speed, proneness to errors, etc.)? P.S.: If you need my PHP and batch file code, please ask me to post it here. I didn't post it now because it's very simple and standard code.
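
    Either method ultimately shells out to mysqldump, so the PHP-vs-batch speed difference is negligible; the locking behaviour during peak hours matters more. A minimal batch sketch, assuming gzip.exe is on the PATH and placeholder credentials and paths:

        @echo off
        REM --single-transaction avoids locking InnoDB tables during the dump
        mysqldump -u backup_user -pSECRET --single-transaction mydb | gzip > D:\backups\mydb.sql.gz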

    Read the article

  • Good Books About Scaling Up Databases/Servers/etc.?

    - by Mehrdad
    I've applied for an internship at a startup company that expects its user base to grow by a large factor in a small amount of time, and so part of their project is to scale everything up so that they're ready: handling more/larger requests efficiently, handling server failures, load balancing, getting more JavaScript to run faster on the client computers, etc. Part of my job will also be figuring out what to do, so it's not obvious what my exact task will be at the moment. I was told that I should start reading up a little more about this so that I would have a little bit of an idea of what to do. What are some good books for me to read on this topic? I have a little bit of experience with the usage of MySQL (and also a little experience with web development), but in no way do I claim any knowledge on the internal workings of databases or distributed systems, so I might need readings more on the introductory side.

    Read the article

  • MySQL and User level logging

    - by Adraen
    I have been looking at logging only certain users' activity in MySQL. I found that logging can be enabled or disabled for all users, but one of the services using the db does a lot of queries, and therefore I would like to log only specific users. Google told me that a flag can be SET to enable or disable logging; however, I cannot modify the service's DB connection code, and asking every single user to enable logging before they do anything might not be as reliable as I want. So, do you know if there is any way to log only a given set of users' queries? Thanks!
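
    One approach that needs no change to client code: send the general log to a table (MySQL 5.1+) and filter by account afterwards. The log still records everyone, so the overhead concern remains; this only pushes the filtering to query time. A sketch, assuming the account of interest is 'bob':

        SET GLOBAL log_output = 'TABLE';
        SET GLOBAL general_log = 'ON';
        -- mysql.general_log records user_host, so one account can be pulled out:
        SELECT event_time, argument FROM mysql.general_log
        WHERE user_host LIKE 'bob%';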

    Read the article

  • Which is faster? 4x10k SAS Drives in RAID 10 or 3x15k SAS Drives in RAID 5?

    - by Jenkz
    I am reviewing a quote for a server upgrade (RHEL). The server will have both Apache and MySQL on it, but the reason for the upgrade is to increase DB performance. The CPU has been upgraded massively, but I know that disk speed is also a factor. RAID 10 gives better performance than RAID 5, but how much difference does the drive speed make? (The 15k discs in the RAID 5 config are at the top of my budget, btw, hence not considering 4x15k discs in RAID 10, which I assume would be the optimum.)
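
    A rough back-of-envelope comparison, assuming ~140 random IOPS per 10k spindle and ~180 per 15k spindle: the RAID 10 array reads from all four drives (about 4 x 140 = 560 IOPS) and pays a write penalty of 2 (about 280 write IOPS); the RAID 5 array reads from three drives (about 3 x 180 = 540 IOPS) but pays a write penalty of 4 (about 135 write IOPS). For a write-heavy database, the 4x10k RAID 10 set likely comes out well ahead despite the slower spindles.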

    Read the article

  • Can I set up a linked SQL Server connection between servers on different networks?

    - by Glenn Slaven
    We have a production SQL server hosted offsite at a hosting company, and we have a staging environment within our own network. We want to be able to set up a SQL job that copies content from a table on the staging server to prod on a regular basis, and I think we need to set up a linked server connection to do this. What do I need to get the hosting company to do to allow us to set this up? We have RDP access to the production servers; I just need to know what network and security configuration needs to happen from the hosting company's perspective so I can ask them to do it.
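
    On the networking side, the hosting company mainly needs to allow inbound TCP 1433 from your office IP (or provide a VPN); after that, the linked server is created on the staging box with the standard stored procedures. A sketch with placeholder server and login names:

        -- on the staging server, pointing at the hosted production box
        EXEC sp_addlinkedserver @server = N'PROD', @srvproduct = N'',
             @provider = N'SQLNCLI', @datasrc = N'prod.example.com,1433';
        EXEC sp_addlinkedsrvlogin @rmtsrvname = N'PROD', @useself = 'FALSE',
             @locallogin = NULL, @rmtuser = N'copy_user', @rmtpassword = N'...';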

    Read the article

  • MySQL: creating a user that can connect from multiple hosts

    - by DrStalker
    I'm using MySQL and I need to create an account that can connect from either localhost or from another server, 10.1.1.1. So I am doing: CREATE USER 'bob'@'localhost' IDENTIFIED BY 'password123'; CREATE USER 'bob'@'10.1.1.1' IDENTIFIED BY 'password123'; GRANT SELECT, INSERT, UPDATE, DELETE ON MyDatabase.* TO 'bob'@'localhost', 'bob'@'10.1.1.1'; This works fine, but is there any more elegant way to create a user account that is linked to multiple IPs, or does it need to be done this way? My main worry is that in the future permissions will be updated for the 'bob' account and not the other.
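
    If the remote hosts fall within one subnet, a wildcard host pattern collapses them into a single account; note that local socket connections still match the separate 'localhost' row, so two accounts remain the practical floor. A sketch:

        CREATE USER 'bob'@'10.1.1.%' IDENTIFIED BY 'password123';
        GRANT SELECT, INSERT, UPDATE, DELETE ON MyDatabase.* TO 'bob'@'10.1.1.%';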

    Read the article

  • Google Maps mashup for notes/househunting

    - by afray
    I'm house-hunting at the moment and I'm trying to geek it out, er, I mean streamline the decision-making process. I'm currently using Google Maps's "my maps" feature to store pins to properties. I create one map per estate agent, then put the relevant info into the individual pins. The idea being, I can look at the map and quickly choose which property to view next. However, the pins don't currently link back to the map they're owned by, so you have to hunt a bit to get the estate agent info; it's a hassle to get all maps displayed in a new session if you have lots of agents; and each pin doesn't automatically show its bubble, so you have to do lots of clicking to see all the info you want. I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seamlessly integrate maps. A few Google searches don't turn anything up either. Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default. You can see an individual house's location, but not all the results of a search. So is there a web site or Windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details.

    Read the article

  • Alternative, more efficient scraping method for a non-coder than Google Docs' importxml and xpath?

    - by binarybunny
    I've searched throughout the net for a simple solution, but it seems everyone has their own unique method (coding language) of achieving this. I'm only just beginning to learn Linux, and my coding skills are thoroughly lacking (non-existent). I love the simplicity of using importxml and xpath, but copying and pasting values after reaching the spreadsheet limit of 50 is getting old. Now that I've seen the light, I would really just like to know of a simple yet scalable solution to get more data into more spreadsheets/databases. Before I really start getting my hands dirty, I would love to know some of the ways you go about accomplishing this.

    Read the article

  • Best practices for SQL Server audit trail

    - by Ducain
    I'm facing a situation today where it would be very beneficial to me and my company if we knew who had logged into SQL and performed some deletions. We have a situation where at least 2 (sometimes 3) people log in to SQL using SQL Server Management Studio and perform various functions. What we need is an audit trail. If someone deletes records (mistakenly or otherwise), I'd like to know what was done. Is there any way to make this happen?
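
    SQL Server 2008 introduced SQL Server Audit, which can capture DELETE statements per principal without triggers (database-level audit specifications require Enterprise Edition in 2008). A minimal sketch with hypothetical names and paths:

        CREATE SERVER AUDIT DeleteAudit TO FILE (FILEPATH = 'D:\Audit\');
        ALTER SERVER AUDIT DeleteAudit WITH (STATE = ON);
        USE MyDb;
        CREATE DATABASE AUDIT SPECIFICATION DeleteSpec
            FOR SERVER AUDIT DeleteAudit
            ADD (DELETE ON DATABASE::MyDb BY public)
            WITH (STATE = ON);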

    Read the article

  • MongoDB: why is my mongo server using two PIDs?

    - by Lucas
    I started my mongo with the following command: [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data 2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance 2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1 2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8 2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49 2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc 2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } } 2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal 2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed 2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017 It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo: [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo root 6540 0.0 0.2 33424 1664 pts/3 S+ 08:52 0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data root 6541 0.6 8.6 522140 52512 pts/3 Sl+ 08:52 0:00 mongod --dbpath /home/lucas/node/nodetest2/data lucas 6554 0.0 0.1 7836 876 pts/4 S+ 08:52 0:00 grep mongo As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great. Additional info: I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below: [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/ about to fork child process, waiting until server is ready for connections. forked process: 6578 ERROR: child process failed, exited with error number 1 [lucas@ecoinstance]~/node/nodetest2$ ls -l data/ total 163852 drwxr-xr-x 2 mongodb nogroup 4096 Jun 7 08:54 journal -rw------- 1 mongodb nogroup 67108864 Jun 7 08:52 local.0 -rw------- 1 mongodb nogroup 16777216 Jun 7 08:52 local.ns -rwxr-xr-x 1 mongodb nogroup 0 Jun 7 08:54 mongod.lock -rw------- 1 mongodb nogroup 67108864 Jun 7 02:08 nodetest1.0 -rw------- 1 mongodb nogroup 16777216 Jun 7 02:08 nodetest1.ns Also, my db path folder is not in the original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.
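
    The two PIDs in the ps output are not two servers: 6540 is the sudo wrapper and 6541 is the mongod it spawned, so nothing is wrong there. The --fork failure is a separate issue: --logpath was given a directory rather than a file. A sketch of the corrected invocation:

        # --logpath must name a file, not a directory
        sudo mongod --dbpath /home/lucas/node/nodetest2/data \
                    --fork --logpath /home/lucas/node/nodetest2/data/mongod.log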

    Read the article

  • mysql replication 1x master, 1x slave

    - by clarkk
    I have just set up one master and one slave server, but it's not working. On my website I connect to the slave server and I insert some rows, but they do not appear on the master, and vice versa. What is wrong? This is what I did: Master: -> /etc/mysql/my.cnf [mysqld] log-bin = mysql-master-bin server-id=1 # bind-address = 127.0.0.1 binlog-do-db = test_db Slave: -> /etc/mysql/my.cnf [mysqld] log-bin = mysql-slave-bin server-id=2 # bind-address = 127.0.0.1 replicate-do-db = test_db Slave: terminal 0 > mysql> STOP SLAVE; // and drop tables Master: terminal 1 > mysql> CREATE USER 'repl_slave'@'slave_ip' IDENTIFIED BY 'repl_pass'; mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_slave'@'slave_ip'; mysql> FLUSH PRIVILEGES; mysql> FLUSH TABLES WITH READ LOCK; -- leave terminal open terminal 2 > shell> mysqldump -u root -pPASSWORD test_db --lock-all-tables > dump.sql mysql> SHOW MASTER STATUS; Slave: terminal 3 > shell> mysql -u root -pPASSWORD test_db < dump.sql terminal 0 > mysql> CHANGE MASTER TO mysql> MASTER_HOST='master_ip', mysql> MASTER_USER='repl_slave', mysql> MASTER_PASSWORD='repl_pass', mysql> MASTER_PORT=3306, mysql> MASTER_LOG_FILE='mysql-master-bin.000003', // terminal 2 > SHOW MASTER STATUS mysql> MASTER_LOG_POS=4, // terminal 2 > SHOW MASTER STATUS mysql> MASTER_CONNECT_RETRY=10; mysql> START SLAVE; mysql> SHOW SLAVE STATUS; Here is the slave status: Array ( [Slave_IO_State] => Waiting for master to send event [Master_Host] => xx.xx.xx.xx [Master_User] => repl_slave [Master_Port] => 3306 [Connect_Retry] => 10 [Master_Log_File] => mysql-master-bin.000003 [Read_Master_Log_Pos] => 106 [Relay_Log_File] => mysqld-relay-bin.000002 [Relay_Log_Pos] => 258 [Relay_Master_Log_File] => mysql-master-bin.000003 [Slave_IO_Running] => Yes [Slave_SQL_Running] => Yes [Replicate_Do_DB] => test_db [Replicate_Ignore_DB] => [Replicate_Do_Table] => [Replicate_Ignore_Table] => [Replicate_Wild_Do_Table] => [Replicate_Wild_Ignore_Table] => [Last_Errno] => 0 [Last_Error] => [Skip_Counter] => 0 [Exec_Master_Log_Pos] => 106 [Relay_Log_Space] => 414 [Until_Condition] => None [Until_Log_File] => [Until_Log_Pos] => 0 [Master_SSL_Allowed] => No [Master_SSL_CA_File] => [Master_SSL_CA_Path] => [Master_SSL_Cert] => [Master_SSL_Cipher] => [Master_SSL_Key] => [Seconds_Behind_Master] => 0 [Master_SSL_Verify_Server_Cert] => No [Last_IO_Errno] => 0 [Last_IO_Error] => [Last_SQL_Errno] => 0 [Last_SQL_Error] => )
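
    Part of the symptom is expected behaviour: master-slave replication is strictly one-way, so rows inserted on the slave never reach the master; the website must write to the master and can read from either. A quick sanity check, assuming a hypothetical table t in test_db:

        -- on the master:
        INSERT INTO test_db.t (col) VALUES ('ping');
        -- on the slave, moments later, the row should be visible;
        -- the reverse direction will never replicate:
        SELECT * FROM test_db.t WHERE col = 'ping';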

    Read the article

  • Do multiple files in SQL Server when using RAID help reduce conflicts in growth and file-locking?

    - by Dr Giles M
    I've been reading around and get the impression that if you are using RAID then using multiple SQL Server files within a filegroup won't yield any more improvements, and the benefits are purely administrative (if you started to run out of space or wanted to partition off data into manageable chunks for backups/balancing the data around your big server room). However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller databases, SQL Server will perform growth and locking operations (for writes) on a LOGICAL file basis, so even if you are using RAID, it seems to make sense to have multiple files in a filegroup to balance I/O. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indices/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?

    Read the article

  • MySQL privileges - DROP tables, not databases

    - by Michal M
    Hi, can someone help me with privileges here? I need to create a user that can DROP tables within databases but cannot DROP the databases themselves. From what I understand from the MySQL docs, you cannot simply do this: "The DROP privilege enables you to drop (remove) existing databases, tables, and views. Beginning with MySQL 5.1.10, the DROP privilege is also required in order to use the statement ALTER TABLE ... DROP PARTITION on a partitioned table. Beginning with MySQL 5.1.16, the DROP privilege is required for TRUNCATE TABLE (before that, TRUNCATE TABLE requires the DELETE privilege)." Any ideas?
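
    The way around the quoted limitation is to grant DROP at the table level rather than on the database: a database-level DROP grant also permits DROP DATABASE, but table-level grants cover only the named tables (each table must be listed explicitly). A sketch with hypothetical names:

        GRANT DROP ON mydb.orders TO 'deploy'@'localhost';
        GRANT DROP ON mydb.customers TO 'deploy'@'localhost';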

    Read the article

  • MySQL Server Is Slow

    - by user2853965746
    I have two MySQL servers, and one was just recently set up. The one I just recently set up is a bit slower than my older one, which bothers me because I don't want my clients to be upset with the speed difference when I launch the new one. The older server runs on Ubuntu (~13.04, I believe) and the new one is on Debian 6. Both servers have 2 GB of RAM, but my newer server has an SSD, so I thought it might be the same speed if not faster. Anyway, the speed difference isn't too much (both are still under a second, but still noticeable). Whenever I select 50 rows from the user table on my older server (SELECT * FROM users LIMIT 50), I get the results in 0.003 s. There are 100,000+ accounts in that table. When I run the same command on the new server's table, which holds only six dev accounts, it takes 0.069 s. It may not seem like a lot, but it's noticeable when you're used to a fast response. I added skip-name-resolve to the config and it didn't seem to help. Basically I'm asking if anyone knows what can cause a MySQL server to be slow on Debian 6. Should I just drop it and switch to Ubuntu like the older server (I don't think the OS is the problem, but you never know)? The older server is under a lot of use too; it's used a lot for web APIs on my website. A lot of connections and stuff, and it still remains fast.

    Read the article
