Search Results

Search found 10042 results on 402 pages for 'orient db'.

  • innodb recovery from .ibd files

    - by mr heLL
    My website crashed a few days ago. The hosting company says an InnoDB database crashed, and they sent me the MySQL data folder. I tried to restore the database, but phpMyAdmin only shows the MyISAM tables. I checked the database with Navicat, and when I click an InnoDB table I get the error "table 'xyz.wp_posts' doesn't exist". Is there any way to fix this on Windows? Feel free to download the db: www.degisimanaliz.com/xyzdb.tar.gz Very old backup: www.degisimanaliz.com/29_Ocak_Yedek_deganaliz.sql.gz
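
    A minimal recovery sketch for a case like this, assuming the folder the host sent also contains ibdata1 and the ib_logfile* files (the .ibd files alone are not enough for a straightforward restore). Paths and the recovery level are placeholders and would need adjusting for a Windows install:

      # Add forced recovery to my.cnf (my.ini on Windows) so corrupt pages do not abort startup:
      #   [mysqld]
      #   innodb_force_recovery = 4
      mysqld --datadir=/restore/mysql-data &
      # Dump whatever InnoDB tables are still readable, then load the dump into a
      # clean server and remove the innodb_force_recovery setting again.
      mysqldump --all-databases > rescue.sql
      mysql -u root -p < rescue.sql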

    Read the article

  • How to backup/restore excluding filestream varbinary in SQL Server 2008?

    - by fdierre
    There is an application used on a production site that uses SQL Server 2008 as its DBMS. The database schema uses FILESTREAM varbinary columns to save binary data on the filesystem instead of directly in the DB tables. Now and then it would be useful to copy the production database onto development machines, mostly for troubleshooting. The database is too big to move around comfortably, but it would be OK if it could be moved leaving out the FILESTREAM varbinary fields. In other words, I am trying to make an "imperfect" copy of the database: on the destination database it is OK to have NULL values instead of the varbinary data. Is this possible? I looked for the feature in SQL Server Management Studio and did a backup that excludes the filegroup containing the FILESTREAM varbinary, but I cannot restore it: SSMS complains that the restore cannot be done because the backup is incomplete (of course). Is it possible to achieve what I am trying to do in some way?
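
    If a backup that excludes the FILESTREAM filegroup cannot be restored, one hedged workaround is to copy the data table by table and substitute NULL for the varbinary column, which gives exactly the "imperfect" copy described above. A sketch with bcp; server, database, table and column names are placeholders:

      # On production: export each table, replacing the FILESTREAM column with NULL.
      bcp "SELECT OrderId, CustomerId, CAST(NULL AS varbinary(max)) AS Document FROM ProdDb.dbo.Orders" queryout Orders.dat -S PRODSERVER -T -n
      # On the development machine: import into an empty copy of the schema.
      bcp DevDb.dbo.Orders in Orders.dat -S DEVSERVER -T -n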

    Read the article

  • Using Performance Monitor To Get IIS7 Response Turnaround Time

    - by alphadogg
    I have an MVC2 web application on W2KR2/IIS7 that I'd like to benchmark/monitor. Some XHR requests made by a browser-based client are suddenly taking 8-10 seconds when they used to take much less time (as per the Chrome Developer Tools timings). The underlying SQL Server query, run with the same parameters, takes 1.4 s according to the total execution time client statistics in SSMS. I'm assuming there are various counters that can dissect the time taken/waiting/processing between IIS7 itself and the web application? For example, can I check how long the application and DB take to produce a response, and how long IIS7 itself takes to serve it?
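
    A hedged starting point is the built-in ASP.NET counters, sampled with typeperf; the counter names below are the common ones, so confirm they exist on the box with "typeperf -q" first:

      typeperf "\ASP.NET\Request Execution Time" "\ASP.NET\Requests Queued" "\Web Service(_Total)\Current Connections" -si 5 -o iis_timings.csv

    Comparing those numbers with the time-taken column that IIS7 writes to its W3C logs, and with the SSMS figure, should show whether the extra seconds are spent in the application, queued in ASP.NET, or elsewhere.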

    Read the article

  • What is the meaning of those numbers in the second column after typing "ls -l"?

    - by Nick Dong
      drwxr-xr-x. 2 root root 4096 Jun 29 16:44 db
      drwxr-xr-x. 2 root root 4096 Jun 29 16:44 djproject
      -rwxr-xr-x. 1 root root   38 Jun 29 16:44 index.html
      drwxr-xr-x. 2 root root 4096 Jun 29 16:44 jobs
      -rwxr-xr-x. 1 root root  252 Jun 29 16:44 manage.py
      drwxr-xr-x. 3 root root 4096 Jun 29 16:44 templates
    What is the meaning of those numbers in the second column? Do they have some relation to file and folder permissions? How do I change the numbers?
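
    For what it's worth, that second column is the hard-link count rather than anything permission-related; a quick shell experiment shows how it changes:

      touch file          # a regular file starts with 1 link
      ln file copy        # add a second hard link to the same inode
      ls -l file          # the count is now 2
      mkdir sub           # a new directory starts with 2 links ("." plus its name in the parent)
      mkdir sub/child     # each subdirectory adds one more (its "..")
      ls -ld sub          # the count is now 3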

    Read the article

  • MSSQL 2005 migration to 2008 Express Edition - Any complications?

    - by FullTrust
    Hey, I've developed an application that uses ASP.NET, LINQ to SQL and MSSQL 2005. However, I would like to migrate it to MSSQL 2008. I don't have MSSQL 2008, so I was wondering if it's possible to detach my 2005 DB and attach it in 2008 Express Edition, to test whether it will work on my host's MSSQL 2008 server? I haven't done anything complicated (CRUD is done through LINQ to SQL, and all stored procs are the default ASP.NET Membership ones). Would this work, or will I get an error since I'm 'downgrading', so to speak? If I download MSSQL 2008 Express Edition, it will be on the same system as my MSSQL 2005 Developer Edition; I'm hoping this won't cause any problems. Thanks
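
    For reference, a hedged sketch of the attach itself on the Express instance (instance name and file paths are placeholders). One caveat worth knowing: attaching to 2008 upgrades the files in place, so keep a copy of the .mdf/.ldf if the database ever needs to go back onto the 2005 instance:

      sqlcmd -S .\SQLEXPRESS -Q "CREATE DATABASE MyDb ON (FILENAME = 'C:\Data\MyDb.mdf'), (FILENAME = 'C:\Data\MyDb_log.ldf') FOR ATTACH;"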

    Read the article

  • Which is faster? 4x10k SAS Drives in RAID 10 or 3x15k SAS Drives in RAID 5?

    - by Jenkz
    I am reviewing a quote for a server upgrade (RHEL). The server will run both Apache and MySQL, but the reason for the upgrade is to increase DB performance. The CPU has been upgraded massively, but I know that disk speed is also a factor. RAID 10 gives better performance than RAID 5, but how much difference does the drive speed make? (The 15k disks in the RAID 5 config are at the top of my budget, by the way, hence not considering 4x15k disks in RAID 10, which I assume would be the optimum.)
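
    A rough back-of-envelope comparison, using common rule-of-thumb per-drive figures rather than vendor specs:

      #   10k SAS ~ 140 IOPS per drive, 15k SAS ~ 180 IOPS per drive
      # RAID 10, 4 x 10k: reads ~ 4 * 140 = 560 IOPS, writes ~ 560 / 2 = 280 IOPS (write penalty 2)
      # RAID 5,  3 x 15k: reads ~ 3 * 180 = 540 IOPS, writes ~ 540 / 4 = 135 IOPS (write penalty 4)

    On those assumptions the two sets read at a similar rate, but the RAID 10 set handles roughly twice the random writes, which usually matters more for a busy DB.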

    Read the article

  • best practices for setting development environment

    - by Sharique
    I use Linux as my primary OS. I need some suggestions on how to set up my desktop and development environment. I work mostly on .NET and Drupal, but sometimes on other LAMP products as well as C/C++ and Qt. I'm also interested in mobile (Android...) and embedded development. Currently I install everything on my main OS, even things I use only a little, and I barely use VMs. Should I use a separate VM for each kind of development (one for .NET/Mono, another for C++, one for mobile, one for the DB only, one for other things, etc.), or keep the primary development environment on the main OS and move the others into VMs? My requirements: the main OS should not be messed up, things should be easy to organize (a must), and performance should be optimal.

    Read the article

  • Running SQL*Plus with bash causes wrong encoding

    - by Petr Mensik
    I have a problem running SQL*Plus from bash. Here is my script:
      #!/bin/bash
      #curl http://192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql > script.sql
      wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql
      set NLS_LANG=_.UTF8
      sqlplus /nolog << ENDL
      connect login/password
      set sqlblanklines on
      start script.sql
      exit
      ENDL
    I download the INSERT statements from our intranet, put them into an SQL file and run the file through SQL*Plus. This works fine. My problem is that when script.sql is saved, the encoding goes wrong: all special characters (like íášc) are broken, which causes wrong characters to be inserted into my DB. The encoding of the file is UTF-8, and UTF-8 is also set on the XSQL page on our intranet, so I really don't know where the problem could be. Any advice regarding the script itself is also welcome; I am a total newbie at Linux scripting :-)
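
    A hedged guess at the culprit: in bash, "set NLS_LANG=_.UTF8" does not set an environment variable at all (that is cmd.exe syntax), so SQL*Plus never sees any NLS setting and falls back to a non-UTF-8 character set when reading script.sql. Exporting a full NLS_LANG value before calling SQL*Plus would look like this; the exact language/territory/charset value is an assumption:

      export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
      sqlplus /nolog << ENDL
      connect login/password
      set sqlblanklines on
      start script.sql
      exit
      ENDL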

    Read the article

  • World Wide Web Publishing Service very slow to restart on IIS? Why?

    - by StacMan
    Every now and then we need to restart our IIS server by restarting the World Wide Web Publishing Service. On most systems this would usually take only a minute or two, but here it can often take up to 10 minutes to stop the service and restart it. Does anyone know a way to work out what is taking so much time to stop the service? After reading around the net I've learned this may be due to locked resources held by users and/or lower-level IIS cached items being recycled, but I can't seem to work out how to validate whether this is true or not. Also, if anyone knows how to fix or speed this up, that would be excellent. We have a large codebase which contains over 280 aspx pages across the site. Our main domain contains about 100 aspx pages, whilst the subdomains contain 15 or 20 each. Some specs: the code is written in C# and runs on .NET Framework 2.0; the server is Windows Web Server 2008 with IIS7; the DB runs SQL Server 2008 Standard.
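
    One hedged way to narrow this down is to skip the full service restart and recycle the application pools one at a time, which avoids most of the outage and also shows whether a particular pool is the slow one (the pool name is a placeholder):

      %windir%\system32\inetsrv\appcmd list apppool
      %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MyMainSite"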

    Read the article

  • ORA-12705: invalid or unknown NLS parameter value specified

    - by viky
    I have a J2EE application hosted on JBoss on Linux. When I try to access the application, I see the following error in the server.log file: ORA-12705: invalid or unknown NLS parameter value specified. When I point the same JBoss instance to a different schema, the application works fine. I went through a few forums and found that the NLS parameter settings look fine. Can anyone help? JBoss version = 4.0.2, DB version = Oracle 10.2. Output of the locale command on Linux:
      $ locale
      LANG=en_US.UTF-8
      LC_CTYPE="en_US.UTF-8"
      LC_NUMERIC="en_US.UTF-8"
      LC_TIME="en_US.UTF-8"
      LC_COLLATE="en_US.UTF-8"
      LC_MONETARY="en_US.UTF-8"
      LC_MESSAGES="en_US.UTF-8"
      LC_PAPER="en_US.UTF-8"
      LC_NAME="en_US.UTF-8"
      LC_ADDRESS="en_US.UTF-8"
      LC_TELEPHONE="en_US.UTF-8"
      LC_MEASUREMENT="en_US.UTF-8"
      LC_IDENTIFICATION="en_US.UTF-8"
      LC_ALL=
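
    A hedged place to look: ORA-12705 is a client-side complaint, so it is worth checking what NLS_LANG (if anything) the JBoss process actually inherits, and trying a known-good value in the start-up environment. The grep pattern for the JBoss process is an assumption:

      tr '\0' '\n' < /proc/$(pgrep -f org.jboss.Main | head -1)/environ | grep -i nls
      export NLS_LANG=AMERICAN_AMERICA.UTF8   # then restart JBoss from this shell and retest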

    Read the article

  • Firefox "auto-complete" is very slow

    - by netvope
    Firefox version: 3.6. My places.sqlite is rather big (114 MB, after being optimized by SpeedyFox). If I turn on auto-complete, it may take 1 or 2 seconds for Firefox to accept a newly typed URL. To reproduce the issue: type a URL into the URL bar and press Enter. Nothing happens, and Firefox consumes 100% CPU (actually 50% of 2 cores) for 1 to 2 seconds; only then does Firefox start the network connection and load the webpage. Since it consumes 100% CPU, I don't think the bottleneck is the disk. I have some experience with SQLite and I know a 100 MB DB is very small; to produce such a delay Firefox must be doing some expensive processing or running inefficient queries. The issue does not appear if: auto-complete is turned off, or the URL is frequently used, or a new profile with no history is used. Does anyone have any idea how to solve the problem? Should I file this as a bug? I don't want to give up my 100 MB of history, but I don't want to give up auto-complete either :)
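
    One hedged piece of housekeeping before giving up the history: with Firefox fully closed, check and compact the database so the location bar queries run against a defragmented file with fresh statistics (the profile path is the usual default and may differ):

      cd ~/.mozilla/firefox/*.default
      sqlite3 places.sqlite "PRAGMA integrity_check;"
      sqlite3 places.sqlite "ANALYZE; VACUUM;"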

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK of the following spec: HPDL385_G2_PrevGen, HP single dual-core Opteron 2214 (2.2 GHz), 4 GB RAM, 2x 10,000 rpm SCSI drives in RAID 1. Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can and wonder what the next step should be. I've looked at memcache, but I've got the impression that this requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!

    Read the article

  • How can I determine which named property (or properties) are filling my Exchange 2007 information store?

    - by Mikey B
    Hi Guys, I'm experiencing the following error on an Exchange 2007 server: Event ID: 9667 Type: Error Category: General Source: msgidNamedPropsQuotaError Description: Failed to create a new named property for database "" because the number of named properties reached the quota limit (). User attempting to create the named property: . Named property GUID: . Named property name/id: . I understand that this can occur if the exchange information store is filling up with named properties... but I don't know how to determine which specific named property is at fault here. Is there a way to examine the DB for this type of info to see if there's a specific recurring named property that is consuming resources? -M

    Read the article

  • ClamAV eating up all available disk space

    - by Ra
    Today I found that my Red Hat server has run out of hard disk space. The culprit seems to be a program called ClamAV, which fills the /tmp directory with thousands of subfolders with names like clamav-004adb870cd79534. All these folders contain this:
      drwx------  2 root root 4.0K Apr 21 07:56 .
      drwxrwxrwt 68 root root  64K Apr 21 08:03 ..
      -rw-------  1 root root  18K Apr 21 07:56 COPYING
      -rw-------  1 root root 4.6M Apr 21 07:56 main.db
      -rw-------  1 root root  14K Apr 21 07:56 main.fp
      -rw-------  1 root root 1.5M Apr 21 07:56 main.hdb
      -rw-------  1 root root  901 Apr 21 07:56 main.info
      -rw-------  1 root root  33M Apr 21 07:56 main.mdb
      -rw-------  1 root root  16M Apr 21 07:56 main.ndb
      -rw-------  1 root root  217 Apr 21 07:56 main.zmd
    When I deleted them, they came back and filled my hard drive again in about an hour. How do I deal with this? Can I safely stop ClamAV? It seems to me that ClamAV is trying, and failing, to update.
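
    A hedged reading of those directories: they look like the temporary copies freshclam makes while downloading signature updates, left behind when an update dies part-way. Before stopping ClamAV outright, it may be worth running an update by hand, checking the log, and clearing only the stale copies (the log path varies by distro):

      freshclam -v                                # run a signature update manually and watch for errors
      tail -n 50 /var/log/clamav/freshclam.log
      find /tmp -maxdepth 1 -name 'clamav-*' -mtime +1 -exec rm -rf {} +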

    Read the article

  • Migrating data from SQL Server 2000 to SQL Server 2005

    - by Muhammad Kashif Nadeem
    I have to migrate existing data from SQL Server 2000 to SQL Server 2005. The schemas of the two databases are different; for example, the Locations table from SQL Server 2000 is split into two tables and has different columns in the new schema. This is a one-time activity; after a successful migration I won't need the old DB anymore. What is the best way to transfer data from one SQL Server to another when the schemas are different? I can write stored procedures that fetch data from SQL Server 2000 and insert/update the tables in SQL Server 2005. What about SSIS? I don't have any experience with it; is it better to create an SSIS package when I won't need it again and would have to learn it first? Thanks.
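
    For a one-off job, a hedged alternative to SSIS is a linked server from the 2005 instance back to the 2000 box, so the reshaping can be done with plain INSERT ... SELECT statements. Server, database, table and column names below are placeholders:

      sqlcmd -S NEW2005 -Q "EXEC sp_addlinkedserver @server = N'OLD2000', @srvproduct = N'SQL Server';"
      sqlcmd -S NEW2005 -d NewDb -Q "INSERT INTO dbo.Locations (LocationId, Name) SELECT LocationId, LocationName FROM OLD2000.OldDb.dbo.Locations;"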

    Read the article

  • NFS Client reports Permission Denied, Server reports Permission Granted

    - by VxJasonxV
    I have two Red Hat 4 servers (the client is 4.6, the server is 4.5), and I'm attempting to mount a share from the server onto the client via NFS. The /etc/exports configuration is as follows:
      /opt/data/config bkup(rw,no_root_squash,async)
      /opt/data/db     bkup(rw,no_root_squash,async)
    exportfs lists these shares (among others), and NFS is running according to the ps output. I've been attempting to use autofs on the client, but have opted to just mount the share manually given the issues I'm having. So, I issue the mount request:
      mount dist:/opt/data/config /mnt/config
      mount: dist:/opt/data/config failed, reason given by server: Permission denied
    OK, so let's see what the server has to say for itself:
      May  6 23:17:55 dist mountd[3782]: authenticated mount request from bkup:662 for /opt/data/config (/opt/data/config)
    The server says it allowed the mount to take place. How can I diagnose why the client and server disagree about the result?
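
    Some hedged checks, since mountd authenticating the request while the client still sees "Permission denied" often points at a stale export list or a name/address mismatch rather than the permissions themselves:

      exportfs -ra          # on the server: re-read /etc/exports
      exportfs -v           # on the server: confirm both paths are exported to bkup
      showmount -e dist     # on the client: what the server actually advertises
      getent hosts bkup     # on the server: does "bkup" resolve to the address the client connects from?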

    Read the article

  • I cannot access my MongoDB from the internet. Can anybody help me?

    - by VicoWu110
    I am using a MongoDB database installed on my Ubuntu machine with the IP address 126.22.252.25. The Ubuntu version is 12.04.3 LTS. MongoDB uses the default port, 27017. On the local machine I can connect with "mongo --host 126.22.252.25", but I cannot use that command from any other Linux machine to access the DB, nor can I use "telnet 126.22.252.25 27017" from my Windows machine. I am sure the 126.22.252.25 machine is accessible from the internet because I can use WinSCP and SecureCRT to log in to it. When I run "netstat -tnlp", it shows: tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN. I have already changed the /etc/mongodb.conf file, modifying the bind_ip parameter from 0.0.0.0 to 126.22.252.25. So, can anyone help me?
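
    A few hedged checks: the netstat output shows mongod still bound to 0.0.0.0:27017 (i.e. accepting connections on every interface), so the remote failures are more likely a firewall or a provider-level filter than the bind address itself:

      sudo service mongodb restart          # pick up the bind_ip change, if that change is really wanted
      sudo netstat -tnlp | grep 27017       # confirm which address mongod now listens on
      sudo iptables -L -n | grep 27017      # any local firewall rule affecting the port?
      nc -vz 126.22.252.25 27017            # run this from a remote machine to test reachability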

    Read the article

  • Company Password Management

    - by Brian Wigginton
    The topic of personal password management has been covered in great detail time after time. This question is aimed at the business or organization that needs to keep track of many unique passwords for many clients. What strategies, tools, or ideas do you have for accomplishing this? I was at an interactive agency where we needed to keep track of client DB, FTP, and mail credentials, across the different environments for each app, so any one client would usually have 3-10 passwords. This gets crazy when there are more than 250 clients.

    Read the article

  • Deploying new code live

    - by nicoX
    What's the best practice for deploying new code to a live (e-commerce) site? For now I stop Apache for roughly 10 seconds while renaming the directory public_html_new to public_html and the old one to public_html_old, which creates a short downtime before I start Apache again. The same question applies if using Git to pull the new repo into the live directory: can I pull the repo while the site is active? And what about if I need to copy a DB as well? During tar compression of the live site (for backup purposes) I noticed that changes occurred in the media directory, which indicates that files keep changing periodically. Could these changes interfere if Apache is not stopped during deployment?
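
    One common pattern for avoiding the stop/rename/start dance is to deploy each release into its own directory and switch a symlink, so Apache never serves a half-copied tree. A hedged sketch; the paths are placeholders, it assumes public_html is already a symlink, and DB schema changes still need their own migration step:

      rsync -a --delete /path/to/new/build/ /var/www/releases/2014-06-29/
      ln -sfn /var/www/releases/2014-06-29 /var/www/public_html   # near-instant switch, no Apache stop
      apachectl graceful                                          # optional; finishes in-flight requests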

    Read the article

  • Changing an MSSQL clustered index field from containing "random" GUIDs to sequential GUIDs - how will this affect existing installations?

    - by Eyvind
    We have an MSSQL database in which all the primary keys are GUIDs (uniqueidentifiers). The GUIDs are produced on the client (in C#), and we are considering changing the client to generate sequential (comb) GUIDs instead of just using Guid.NewGuid(), to improve db performance. If we do this, how will this affect installations that already have data with "random" GUIDs as clustered PKs? Can anything be done (short of changing all the PK values) to rebuild the indexes to avoid further fragmentation and bad insert performance? Please give explicit and detailed answers if you can; I am a C# developer at heart and not all too familiar with all the intricacies of SQL Server. Thanks!
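
    On the "can anything be done" part: existing rows keep their random GUIDs, but a one-off rebuild of the clustered index removes the fragmentation they have already caused, and new comb GUIDs should then land near the end of the index instead of splitting pages throughout it. A hedged sketch; server, database, table name and fill factor are placeholders:

      sqlcmd -S MYSERVER -d MyDb -Q "ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (FILLFACTOR = 90);"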

    Read the article

  • Selective Disable APC caching

    - by Victor
    I installed APC on my VPS and it works great with the W3 Cache WordPress plugin. My problem is that there is one database in MySQL which the client side polls every few seconds to see whether there are new updates. This DB contains time-sensitive information and hence can't be part of the cached data. How can I disable APC for this database/these files? Or can I set a very short expiry for certain types of data? Any help is highly appreciated.

    Read the article

  • MySQL-5.5.10 - Lost connection to MySQL server during query (Both Web Clients and MySQL Slaves)

    - by kwiksand
    We've just upgraded our existing MySQL 5.1 DB servers to newer (much better) hardware running MySQL 5.5, and things have been going mostly smoothly for almost 6 weeks. Just in the last few days, I've noticed a few errors, such as, from a MySQL slave:
      [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)
    or from Apache/other clients:
      Lost connection to MySQL server at 'reading initial communication packet', system error: 110
    At one point this evening, many web nodes reported this error over a three-minute period (many such reports, as this was a busy period). However, the issues don't appear to correspond with any times of extreme load. For all intents and purposes, the connection/thread load on MySQL is at a normal rate (between about 10 and 40 connected threads), and web load has been a LOT higher at times over the last few weeks. Could there be other reasons for these connection errors that I'm not seeing?
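
    Some hedged checks: "Lost connection ... reading initial communication packet" under otherwise normal load often points at connection-setup trouble (DNS stalls, connect_timeout, aborted connects) rather than the query workload itself:

      mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted_c%';"          # are Aborted_clients / Aborted_connects climbing?
      mysql -e "SHOW GLOBAL VARIABLES LIKE 'connect_timeout';"
      mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"
      grep -n 'skip.name.resolve' /etc/my.cnf                   # reverse-DNS stalls during the handshake are a classic cause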

    Read the article

  • mongoexport csv output array values

    - by 9point6
    I'm using mongoexport to export some collections into CSV files; however, when I try to target fields which are members of an array, I cannot get it to export correctly. The command I'm using is:
      mongoexport -d db -c collection -fieldFile fields.txt --csv > out.csv
    and the contents of fields.txt are similar to:
      id
      name
      address[0].line1
      address[0].line2
      address[0].city
      address[0].country
      address[0].postcode
    where the BSON data would be:
      { "id": 1, "name": "example", "address": [ { "line1": "flat 123", "line2": "123 Fake St.", "city": "London", "country": "England", "postcode": "N1 1AA" } ] }
    What is the correct syntax for exporting the contents of an array?
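
    A hedged guess at the syntax: mongoexport addresses array elements with dot notation and a numeric index rather than brackets, so the field file would look like the following, with the command itself unchanged:

      id
      name
      address.0.line1
      address.0.line2
      address.0.city
      address.0.country
      address.0.postcode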

    Read the article

  • Cannot open the device or file specified for office files

    - by MadBoy
    Recently I've noticed on a couple of computers that when users try to open Office files, or links (to a server path) pointing to Office files, they get the error "Windows cannot access the specified device, path, or file", but the files themselves open up without problems. This has happened on 4 Windows XP computers with Office 2003 installed so far. On one computer it was an XLSX file: every time the user opened it, it did open, but the error popped up; on the other hand, when I open it directly from within Office it works fine, without the error. On another 3 computers it happened after the user clicked a link to an Access DB: it errored out, but Access began MSI configuration (since it was the first time the user had logged in to that computer) and in the end it opened up properly. After closing Access and trying again, the problem disappeared. A faulty patch, perhaps? ESET Smart Security 4 is installed.

    Read the article
