Search Results

Search found 54131 results on 2166 pages for 'database project'.

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, at some seemingly random point during the day, the log backup will jump from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there a good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.
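
    A hedged starting point, since nothing about this particular server is confirmed: SQL Server 2005's default trace records Log File Auto Grow events together with the login, host and application that triggered them, and DBCC OPENTRAN shows who is holding the oldest open transaction while the log is ballooning. The database name below is a placeholder:

        -- log auto-grow events captured by the default trace (SQL Server 2005)
        DECLARE @path nvarchar(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT t.StartTime, t.DatabaseName, t.LoginName, t.HostName, t.ApplicationName
        FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
        JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
        WHERE te.name = 'Log File Auto Grow'
        ORDER BY t.StartTime DESC;

        -- and, while the log is growing: who holds the oldest active transaction?
        DBCC OPENTRAN ('MyDatabase');
        SELECT s.session_id, s.login_name, s.host_name, s.program_name
        FROM sys.dm_exec_sessions AS s
        JOIN sys.dm_tran_session_transactions AS t ON t.session_id = s.session_id;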

  • SQL Server 2008 Create Foreign Key Manually

    - by tgriffiths
    I have inherited an old database which wasn't designed very well. It is a SQL Server 2008 database which is missing quite a lot of foreign key relationships. Below shows two of the tables, and I am trying to manually create a FK relationship between dbo.app_status.status_id and dbo.app_additional_info.application_id. I am using SQL Server Management Studio and trying to create the relationship with the query below:

        USE myDatabase;
        GO

        ALTER TABLE dbo.app_additional_info
            ADD CONSTRAINT FK_AddInfo_AppStatus FOREIGN KEY (application_id)
            REFERENCES dbo.app_status (status_id)
            ON DELETE CASCADE
            ON UPDATE CASCADE;
        GO

    However, I receive this error when I run the query:

        The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_AddInfo_AppStatus".
        The conflict occurred in database "myDatabase", table "dbo.app_status", column 'status_id'.

    I am wondering if the query is failing because each table already contains approximately 130,000 records? Please help. Thanks.
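
    The row count itself is not the issue; this error usually means some existing rows violate the new constraint. A hedged sketch, reusing the names above, to locate the orphans, with WITH NOCHECK as a last resort (it leaves the constraint marked untrusted, so cleaning up the orphans first is the better fix):

        -- rows whose application_id has no matching status_id (orphans)
        SELECT a.application_id
        FROM dbo.app_additional_info AS a
        LEFT JOIN dbo.app_status AS s
            ON s.status_id = a.application_id
        WHERE s.status_id IS NULL;

        -- escape hatch: add the FK without validating existing rows
        ALTER TABLE dbo.app_additional_info WITH NOCHECK
            ADD CONSTRAINT FK_AddInfo_AppStatus FOREIGN KEY (application_id)
            REFERENCES dbo.app_status (status_id);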

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a VirtualBox machine running Ubuntu Server 10.04 LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a VPS service. Apache + mod_wsgi for deploying the Django app seems like the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS. The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and Apache (creating a VirtualHost), but something is missing and I don't know what it is. I think I lack networking skills, and that's the big problem. Hosting the app on a VirtualBox VM adds some difficulty because I don't know which IP to use. This is what I've got:

    A file placed at /etc/apache2/sites-available:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.my-domain.com
            ServerAlias my-domain.com
            Alias /media /path/to/my/project/media
            DocumentRoot /path/to/my/project
            WSGIScriptAlias / /path/to/your/project/apache/django.wsgi
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    The django.wsgi file:

        import os, sys
        wsgi_dir = os.path.abspath(os.path.dirname(__file__))
        project_dir = os.path.dirname(wsgi_dir)
        sys.path.append(project_dir)
        project_settings = os.path.join(project_dir, 'settings')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()
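
    Two steps that are easy to miss and may be the missing piece. A sketch under assumptions: the vhost file above is named "mysite", and the VM uses a bridged adapter so it gets its own LAN address (the IP below is a placeholder):

        # enable mod_wsgi and the site, then reload Apache
        sudo apt-get install libapache2-mod-wsgi
        sudo a2enmod wsgi
        sudo a2ensite mysite
        sudo service apache2 reload

        # find the VM's address; with no DNS yet, map the domain to it
        # in /etc/hosts on the machine you browse from
        ip addr show eth0
        echo "192.168.1.50  www.my-domain.com my-domain.com" | sudo tee -a /etc/hosts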

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time--30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since the index would need to be rewritten with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
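
    For what it's worth, the usual InnoDB answer matches that intuition, with one caveat: keep the primary key in place during the load (InnoDB clusters rows by PK, so adding it afterwards rewrites the whole table) and defer only the secondary indexes. A hedged sketch with placeholder table and column names:

        -- during the load: relax checks that slow bulk inserts
        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        -- ... LOAD DATA / bulk INSERTs into target_table here ...
        SET unique_checks = 1;
        SET foreign_key_checks = 1;

        -- after the load: build the secondary indexes in one pass
        ALTER TABLE target_table
            ADD INDEX idx_customer (customer_id),
            ADD INDEX idx_created (created_at);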

  • Time Machine vs Source Control?

    - by Blub
    Finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.

  • Using Postgres on Volusion site

    - by Sean
    Okay, I apologize if this is so basic that I should know the answer, but I'm not sure where else to go for the solution. I would like to start a small store site using Volusion. I would like some custom ASP code to query data that I currently have in a Postgres database. I would like to be able to just move the database file(s) onto the Volusion server via ftp and access them from my store site (via the custom ASP). Do I need to install Postgres onto the server to do this, or can I just ftp my database file(s) and access them with the ASP code? I think I need to install Postgres, but would like to do this without such an installation if possible.

  • Moving FederatedEmail/SystemMailbox from One Store to Another - Exchange 2010

    - by ThaKidd
    Hello all. Just upgraded from Exchange 2003 to 2010. Somehow, I have two mailbox databases on my single Exchange 2010 server. One database contains all of the mailboxes I moved from the 2003 Exchange server; the other contains two SystemMailboxes and one FederatedEmail mailbox. I am just starting to get a grasp of the commands used in the EMS. I was wondering if someone could point me in the right direction to move these three "system" mailboxes into my actual mailbox database so I can eliminate the second database. Just trying to shore up this one server before I roll out my backup Exchange server. Thanks in advance! Your help and ideas are greatly appreciated as I try to keep this setup as simple as possible.
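
    A sketch of the EMS route, assuming the databases are named "Mailbox Database 1" (the target) and "Mailbox Database 2" (the one to retire); arbitration mailboxes must be addressed with the -Arbitration switch:

        Get-Mailbox -Database "Mailbox Database 2" -Arbitration |
            New-MoveRequest -TargetDatabase "Mailbox Database 1"

        # watch the moves, then clean up and drop the empty database
        Get-MoveRequest | Get-MoveRequestStatistics
        Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest
        Remove-MailboxDatabase -Identity "Mailbox Database 2"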

  • Replacement for public folder workflow, I'm confused as to how SharePoint does it.

    - by RodH257
    For years Microsoft has been slowly phasing out public folders; perhaps Exchange 2010 really is the LAST TIME they'll be shipped... I've heard SharePoint is the replacement, but I don't fully understand it. Can someone give me an idea of how to replace this workflow? In our office we have projects, and they have a project number, e.g. 10353. Each job has a public folder, organized in a hierarchy like:

        Projects
            Year
                Folder
                    Subfolders

    The main subfolder we use is for general correspondence. When an email is received that relates to a project, it is dragged and dropped (or moved via right-click) into a public folder. Adding public folder favourites for each user helps with this. When an email is sent, we have a custom email form, which is the default email form but with a project number field next to the subject line. When you enter the job number in there, it carbon-copies our filing system, which reads the job number and puts the email in the public folder for you. If you need to refer to emails, you go to the public folder and find them there. This isn't the best with large jobs, but it works OK. Now, I have limited experience with SharePoint (well, WSS); we've used it to build some neat discussion boards/polls etc. as an intranet site, but I haven't seen much of its integration with Outlook. The great thing about our solution is how tightly it integrates with Outlook, which is exactly where the emails are. If you want to forward an old email, you go to the public folder and forward it; simple. Any solution that replaces it should be at least as easy as this. Improvements we would like are better searching of emails, better support in Exchange (i.e. future versions), and also, custom forms in Outlook (the VBA kind) are being phased out, so avoiding those would be good. Does SharePoint do this? Or what solutions do this kind of thing?

  • Samba users not added until they log on first? Edit: How do I add users to tdbsam without a password prompt?

    - by glisignoli
    I add users to my server with the command:

        useradd -m -p PASS_HASH -s /usr/sbin/nologin USERNAME

    Then I try to access their Samba home share, but it never shows up until I log in as the user:

        root:~$ sudo login failtest
        Password: ######
        Added user failtest.

    Is there some way of adding the user without logging in? Edit: The problem is that the user is added with the useradd command, but Ubuntu seems to run an initialisation script when the user logs on for the first time. This script then adds that user to the tdbsam user database. I need to find that initialisation script, or the method it uses to add a user to the tdbsam database without requiring any user input (smbpasswd -a USER prompts the user for a password). So all I need is a way to add a user and password to the tdbsam database without prompting for a password (e.g. samba-add-user.sh USERNAME PASSWORD).
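
    smbpasswd can read the password from stdin with -s, which removes the prompt. A sketch of a wrapper along those lines (the script name and hashing choice are illustrative, not a confirmed Ubuntu mechanism):

        #!/bin/sh
        # usage: samba-add-user.sh USERNAME PASSWORD  (hypothetical wrapper)
        username="$1"
        password="$2"

        # create the system account with a crypted password and no login shell
        useradd -m -p "$(openssl passwd -1 "$password")" -s /usr/sbin/nologin "$username"

        # feed the password to smbpasswd twice on stdin (-s = read from stdin)
        printf '%s\n%s\n' "$password" "$password" | smbpasswd -a -s "$username"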

  • Best practice for ONLY allowing MySQL access to a server?

    - by Calvin Froedge
    Here's the use case: I have a SaaS system that was built (dev environment) on a single box. I've moved everything to a cloud environment running Ubuntu 10.10. One server runs the application, the other runs the database. The basic idea is that the server that runs the database should only be accessible to the application and the administrator's machine, both of which have the correct RSA keys. My question: would it be better practice to use a firewall to block access to ALL ports except MySQL, or to skip the firewall/iptables and just disable all other services/ports completely? Furthermore, should I run MySQL on a non-standard port? This database will hold quite sensitive information, and I want to make sure I'm doing everything possible to safeguard it properly. Thanks in advance. I've been reading here for a while, but this is the first question I've asked. I'll try to answer some as well = )
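
    Defense in depth is the usual answer: disable what you can, and firewall the rest so a misconfigured service is still unreachable. A minimal iptables sketch (the addresses are placeholders; MySQL's bind-address setting and key-only SSH still apply on top of this):

        # default-deny inbound; allow loopback and established traffic
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # MySQL only from the application server, SSH only from the admin IP
        iptables -A INPUT -p tcp -s 10.0.0.5 --dport 3306 -j ACCEPT
        iptables -A INPUT -p tcp -s 203.0.113.7 --dport 22 -j ACCEPT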

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory in which there is a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. The project should only be writable by a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writable permissions. I can solve the permissions problem with umask:

        $ cd project
        $ umask 002
        $ touch test.txt

    File "test.txt" is now group-writable, but still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I want to know whether there is a way to set some group implicitly, the way umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux. Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACLs, but the solution below combined with umask 002 in the current session is good enough for me and is much simpler.
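
    For reference, the classic answer the update alludes to is the setgid bit on directories, which makes new files inherit the directory's group rather than the creator's default group; a sketch:

        # give the tree to mygroup and set the setgid bit on every directory
        chgrp -R mygroup /path/to/project
        find /path/to/project -type d -exec chmod g+s {} +

        # from now on, files created anywhere under project/ get group "mygroup";
        # combined with "umask 002" they are group-writable as well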

  • Keep Uploaded Files in Sync Across Multiple Servers - LAMP

    - by Dfranc3373
    I have a website that currently uses two servers: an application server and a database server. The load on the application server is increasing, so we are going to add a second application server. The problem I have is that the website lets users upload files to the server. How do I get the uploaded files onto both of the application servers? I do not want to store images directly in the database, as our application is database-intensive already. Is there a way to sync the servers with each other, or is there something else I can do? Any help would be appreciated. Thanks
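
    If uploads land on only one of the two servers (e.g. via sticky sessions), a cron-driven rsync is the simplest sketch of a fix; if both servers accept uploads, shared storage (NFS) or a two-way tool such as unison is safer. A hedged example, with placeholder paths and hostnames and key-based SSH assumed:

        # crontab on app1: push new uploads to app2 every minute
        * * * * * rsync -az /var/www/uploads/ app2:/var/www/uploads/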

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method that provides registration to the system. Obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation:

        1. Have a dummy username/password to be used when trying to register with the system
        2. Have a separate Django/XML-RPC application on another URL (e.g. /register) that is not protected by basic auth

    Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a specific POST request) even if all XML-RPC calls over /RPC2 are protected?
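
    One structural point may help frame the answers: Apache makes the basic-auth decision per URL, before the request body is read, and the XML-RPC method name lives inside the POST body, so Apache alone cannot exempt a single call on /RPC2. The usual compromises are moving auth into the application itself, or mapping the same WSGI application onto a second, unprotected path used only for registration. A sketch of the latter (the path name is an assumption):

        # same application, second mount point without Auth* directives
        WSGIScriptAlias /RPC2-open /path/to/your/project/apache/django.wsgi
        <Location /RPC2-open>
            # no AuthType/Require here: reachable anonymously, so the
            # application should accept only the registration method on it
        </Location>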

  • How to continue an HTTrack mirroring session from the command line?

    - by isme
    I want to drive my mirroring project using the Command Prompt instead of the WinHTTrack interface so that I can script and schedule the mirroring session more easily. The output of httrack --help gives a simple command for continuing an interrupted mirroring session:

        example: httrack --continue
        continues a mirror in the current folder

    When I try httrack --continue in my HTTrack project folder, all I get is output like this:

        Example: -%F "<!-- Mirrored from %s by HTTrack Website Copier/3.x [XR&CO'2010], %s -->"
        * Option %F needs to be followed by a blank space, and a footer string

    With each parameter on a new line for readability, the first line of my doit.log file looks like this:

        -qiC1%P0s0b0u1j0%s%u0N0%I0p1DaK0c1T30H0%kf2E1800A25000%c0.1%f#f
        -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        -%F ""
        -%l "en, en, *"
        http://saa.gov.uk/search.php?SEARCHED=1&SEARCH_TABLE=council_tax&SEARCH_TERM=City+of+Edinburgh&DISPLAY_COUNT=100
        -O1 "C:\\Users\\Iain\\Projects\\Council Tax Analysis\\Code\\HTTrack\\Council Tax Valuation List"
        -* \
        +*search.php?SEARCHED=1*
        -*DISPLAY_MODE=FULL*

    The parameter %F "" should tell HTTrack to use an empty footer. I used the WinHTTrack interface to create the project and start the mirroring session. I can interrupt and continue the mirroring session using the interface. The HTML files saved by WinHTTrack have no footer.

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only, and the data only.

        /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
        /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump

    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema, and finally the data? I did the following:

        pg_restore --format=c -s -C -d template1 xxx.schema.dump
        pg_restore --format=c -a -d xxx xxx.data.dump

    The first restore creates the database with empty tables, but the second gives many errors like this one:

        pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734"
        DETAIL: Key (contentid)=(1474566) is not present in table "Table23".

    Any ideas?
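
    A data-only restore replays tables in whatever order they appear in the archive, so foreign keys fire mid-load. pg_restore has a flag for exactly this case; a sketch (it requires connecting as a superuser, since it disables triggers during the load):

        pg_restore --format=c -s -C -d template1 xxx.schema.dump
        pg_restore --format=c -a --disable-triggers -d xxx xxx.data.dump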

  • Lotus Notes 8.5 quota

    - by Cividan
    We're using Lotus Notes 8.5, and I have a user who was over his quota because he had sent 6 emails with attachments over 800 MB (no comment...). I deleted these oversized emails and emptied the trash, but Domino keeps sending warning emails about the quota. I checked the All Documents view and they are no longer there, and I re-emptied the trash. I saw a post on the internet saying to compact his database. When I go to File, Application, Properties and click on the Info tab, I see that he uses 35.7% of the 3 GB database. When I click "compact", a message says the compacting of the database is being processed... the message disappears after about a minute, but nothing else seems to happen, and when I look back later the space problem has not changed. Any advice would be appreciated.
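
    For what it's worth, a client-side compact often doesn't reduce the physical file size; the server-side compact with the -B option is what usually recovers disk space. A sketch from the Domino server console (the mail file path is a placeholder):

        load compact mail/username.nsf -B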

  • MySQL gzipped Export in PhpMyAdmin has wrong size in Mozilla

    - by Michal Gow
    That is really strange. I am using phpMyAdmin 2.11.9.6 on Linux hosting. When I export databases using "gzipped" compression in Mozilla Firefox, I get files which have the size of the uncompressed database, but they seem to download at incredible speed (10 times quicker than my ISP allows). So in the end, for a database of 10 MB I get a 10 MB gzip downloaded in milliseconds; it indeed shows 10 MB on the drive; and it is corrupted. Zip compression works just fine (I get a file of about 1 MB with the correct compressed contents of the database). And the weirdest thing: this happens in Mozilla Firefox (13.0.1) only; Internet Explorer 9 downloads correct gzipped files... Any hint?
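
    A hedged guess, not a confirmed diagnosis: if the server sends the dump with a Content-Encoding: gzip header, Firefox transparently decompresses the stream but keeps the .gz filename, which would explain the full size, the apparent download speed, and the "corrupt" archive, while IE handles it differently. The response headers are easy to check:

        # replace the URL (and any session cookie) with the real export request;
        # only the headers matter here
        curl -s -D - -o /dev/null 'http://example.com/phpmyadmin/export.php' \
            | grep -i '^content-'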

  • Is the SQL backend the right choice for LDAP?

    - by skomak
    Hi, I have run into trouble with my LDAP database after unexpected system reboots. This database was only being read, so it is puzzling that it ended up with errors. So I'm searching for a replacement for this backend; I think SQL would be more reliable. What do you think, is it? I also need to know how much performance I'll lose, and roughly how many more IOPS (I/Os per second), in percentage terms, it would take. Thanks in advance, skomak

  • Connecting SQL 2005 to Oracle 10g

    - by Lorn
    Environment:

        - Oracle 10g database on a 32-bit Windows Server 2003 machine
        - SQL Server 2005 database on a 32-bit Windows Server 2003 machine

    I am trying to connect the above databases through Heterogeneous Services. I have updated the following files: tnsnames.ora, listener.ora and hs.ora. When performing a test connection from SQL Developer, I get an ORA-28500 error indicating that the login for the SA user is incorrect. I also tried using another authenticated user that has rights to the database. I can successfully connect to SQL 2000. Has anyone experienced such a problem before?
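
    One cause worth ruling out (an assumption, not a confirmed fix): Heterogeneous Services passes credentials to SQL Server case-sensitively, so the username and password usually need to be double-quoted when the database link is created. A sketch with placeholder link and DSN names:

        -- run in SQL*Plus / SQL Developer against the Oracle 10g instance
        CREATE DATABASE LINK mssql_link
            CONNECT TO "sa" IDENTIFIED BY "TheSaPassword"
            USING 'HS_DSN';

        -- hypothetical smoke test against a known SQL Server table
        SELECT COUNT(*) FROM "some_table"@mssql_link;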

  • MediaWiki create user error after migration

    - by ing0
    So I had a MediaWiki installed on Windows with MySQL (running on AWS RDS). I've since moved it to a Debian server for various reasons, but I think I've broken the database because of the different MediaWiki versions involved. The Windows install was v1.20alpha (58f390e). The new Debian install is v1.15.5-2squeeze4. I've tried to update Debian but it doesn't find an update, so is this the latest squeeze version? Everything seems to work OK except adding users: that gives me a database error, so I ran php maintenance/update.php, which ran some stuff OK but didn't make a difference. I think I haven't taken the correct approach to this sort of move; does anyone know a better way of doing it? I still have the old wiki running (but not used) on Windows, using the same database, so I could always try this again.
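
    The likely root cause is a downgrade: the schema was written by 1.20alpha, and the 1.15 that squeeze ships is older, so errors like this are expected; maintenance/update.php only upgrades schemas, it never downgrades them. A sketch of the usual fix, installing a tarball that matches or exceeds the old version (the URL and version are assumptions to verify against the current releases):

        cd /var/www
        wget http://releases.wikimedia.org/mediawiki/1.20/mediawiki-1.20.0.tar.gz
        tar xzf mediawiki-1.20.0.tar.gz

        # reuse the old settings and uploaded files, then sync the schema
        cp /path/to/old/LocalSettings.php mediawiki-1.20.0/
        rsync -a /path/to/old/images/ mediawiki-1.20.0/images/
        php mediawiki-1.20.0/maintenance/update.php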

  • SQL Server 2005/2008 Licensing Decision

    - by Hakim
    Hello, I have purchased a dedicated server from a reputable hosting company. It only has the Windows Server 2008 OS installed, and no SQL Server. The server configuration is an Intel dual-core processor with 2 GB of RAM and a 100 GB HDD. I want to host my web services on that server, using MS SQL Server 2005 at the backend. There are multiple web services, each using a different database. Microsoft has CAL-based licensing, which I understand is based on the number of users accessing the database directly (I may be wrong). But my users will be accessing the web service, with no direct connection to the database as such. And the number of users accessing the web server cannot be known and is not under my control. Which licensing is best suited for this kind of setup? I don't need analysis and BI services right now, but I may want to upgrade later. Any help will be appreciated. Thanks

  • When to use MySQL replication or DRBD for HA on Xen VM?

    - by user62513
    I'm setting up a database which needs to provide high availability. My primary concerns are high performance and robustness (I don't want something that fails fast and badly). The database is accessed by the application at an average of 300 qps. It will run on Xen VMs and has some InnoDB tables as well as MyISAM tables. The VMs are connected via 100 Mbit/s Ethernet. Which of the two, MySQL replication or DRBD, would you recommend in such a situation? Or should I use DRBD to make the master database highly available and use MySQL replication on the slaves? I'm a developer, so these things are not easy for me to judge soundly.

  • Migrating Magento Concern

    - by Pankaj Upadhyay
    We have a Magento 1.5.0.1 store running at a hosting provider. Now we need to migrate it from that server to a new hosting provider. I talked with a technical guy from the new hosting provider, who told me to do the following:

        1. Go into the cPanel Backup Wizard.
        2. Make a FULL BACKUP and download the zip file.
        3. Upload that zip file to their server, in my root folder.
        4. Tell them, and they will do the restore.

    My concern: will everything work as expected? What about the connection strings and the database and all? Will the database be automatically created and work the same? Also, somewhere I read that v1.5.0.1 used an older type of database which might not work on newer MySQL versions; can this have any impact? Should I proceed in the same manner, or do I need to take care of some additional things to ensure smooth running?

  • MySQL transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone will help... Facts:

        - 30 GB MySQL database on a remote server (about 20,000,000 rows)
        - data is updated once a week on the local network (MySQL)
        - I need to transfer/replace the remote database with the locally updated one
        - the connection is about 2 Mb (real Mb, not Mbps) up/down

    The point is that I can't have downtime on the remote MySQL server. Until now I tried:

        - Navicat data sync: OK, but takes about 3 days to finish
        - dbForge: OK, but needs 5 days to finish
        - mysqldump transferred to the remote server and executed: about a day, but a lot of downtime
        - rsync of the database folder (/mysql/lib/MY_DATABASE): 4 hours, but afterwards I always have to run a repair on the remote server, which takes about 2 hours, and there is a lot of downtime
        - mysqldump piped from the command line directly to the server: still not satisfied, many problems
        - MySQL replication: slow

    I could give you more things that I tried... Anyway, what is the best way to refresh the remote MySQL weekly and at the same time have zero downtime and no huge server load? If you have any idea, please share.
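
    One pattern that fits the zero-downtime requirement (a sketch; the schema and table names are placeholders): load the weekly dump into a staging schema while the live one keeps serving, then swap with RENAME TABLE, which is atomic for the whole list. Compressing the dump in transit also helps a lot on a 2 Mb link.

        # 1. ship compressed and load into a staging schema
        mysql -h remote -e "CREATE DATABASE IF NOT EXISTS mydb_staging"
        mysql -h remote -e "CREATE DATABASE IF NOT EXISTS mydb_old"
        gzip -dc weekly_dump.sql.gz | mysql -h remote mydb_staging

        # 2. atomic swap per table: clients never see a missing table
        mysql -h remote -e "
            RENAME TABLE mydb.big_table         TO mydb_old.big_table,
                         mydb_staging.big_table TO mydb.big_table"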

  • Solaris 10: Identify a PID and the CPU it's running on

    - by Marcus
    I have multiple instances of a database running on a Solaris system. I'd like to prove that each database process is being handled by a different CPU. Essentially, I want to be able to do something like ps -ef | grep <process_name> to get the PIDs, and then run another command (if required) to identify the CPU... Is prstat able to do this? I'm assuming that as each database instance is started, it uses a different CPU; I'm not sure if I'm understanding this correctly... The reason I want to do this is that Sun hardware has slow CPUs, but lots of them, so to get the best performance out of it I need to try to spread the load among the CPUs... Thanks
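
    Two hedged notes: nothing guarantees an unbound process stays on one CPU (the scheduler migrates it freely), and prstat's CPU column reports usage, not placement. Binding makes placement explicit and verifiable; a sketch (PIDs and processor ids below are placeholders):

        # query current bindings; unbound processes show no binding
        pbind -q

        # bind each database instance to its own processor
        pbind -b 0 12345
        pbind -b 1 12346

        # or dedicate whole processor sets to an instance
        psrset -c 2 3        # create a set from CPUs 2 and 3 (prints the set id)
        psrset -b 1 12347    # bind PID 12347 to set 1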
