Search Results

Search found 37714 results on 1509 pages for 'database documentation'.

  • Shared resources in Windows Server 2008 were lost

    - by user316687
    We have an Oracle database on Windows Server 2003 whose archived redo logs are stored on a shared resource on a Windows Server 2008 machine: \\192.168.1.189\d$\folder_for_archivedlogs. However, according to Oracle's alert.log, at 10:01 p.m. that shared resource was lost and the database became inaccessible. From my Windows Server 2003 machine I couldn't reach the share in Windows Explorer, but I did get a response when I ran ping 192.168.1.189. I reviewed all the Event Logs on that Windows 2008 machine, but there are no errors at 10:00 p.m. or 11:00 p.m. Has anyone seen a similar case before? (Shared resources get lost, but you can still ping the server and there are no error events in the Event Logs.)

    Read the article

  • Deleted one membership table. Possible to import without breaking relationship?

    - by superexsl
    Hey, I hope this isn't going to be tricky or time-consuming, so fingers crossed. I'm working with the ASP.NET membership tables, and quite a few other tables that I've built have a relationship with the dbo.aspnet_Membership table. I've accidentally deleted the dbo.aspnet_Membership table and can't get it back. There was no major data in it (it's on my local machine), but I would really like to copy that one table from another database I have, mainly for the sake of not breaking the schema. Is this possible? I'm worried that if I run the Aspnet_regsql.exe tool, it's going to break the schema and remove all data from the tables as well as the relationships (which would take a while to re-establish). Is there any way I can import just the dbo.aspnet_Membership table into my current database? Thanks for any advice!

    Read the article

  • concrete uses of LDAP?

    - by ajsie
    I'm new to LDAP. I wonder what some concrete examples of using LDAP are: things that become much easier when you have 3-7 Linux computers on a small company network. One use that seems very important to me is configuring LDAP to handle system authentication, so you don't have to create the same accounts on every computer (a small sketch of that idea follows this entry). Are there other must-dos for a small network that save time? My small network runs Apache servers and database servers. Also, should LDAP live on its own machine? I guess it's not good to put it on the Apache or database servers, since those are performance-sensitive.

    Read the article
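
    One concrete flavour of the system-authentication use case mentioned above: an account is defined once in the directory and every client looks up the same entry. A hedged sketch; the hostname and base DN are made-up placeholders, not values from the question.

        # Look up a single centrally defined account from any machine on the LAN.
        ldapsearch -x -H ldap://ldap.example.local \
            -b "ou=people,dc=example,dc=local" "(uid=alice)" uid uidNumber homeDirectory

    With nsswitch/PAM pointed at the same directory on every box, that one entry is what logs alice in everywhere, which is the "create the account once" win. Other common small-network uses built on the same directory are shared address books, Apache HTTP authentication, and centralized group membership.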

  • Mysql Servers for Attendance System

    - by foo
    I'm building an attendance system. There are about 20 places where people will check in and check out using a Mifare 1K card, and MySQL will be the database. The system will display something like "#ID IN: 800AM" the first time a user checks in and "#ID OUT: 400PM" when the user checks out. For this to work, all the databases need to be synchronized with each other at all times. For example, if user A checks in at location #1 but the server at location #1 is down by the time he wants to go home, he needs to be able to check out at location #2 or the nearest working server. The server at location #2 should display "#ID OUT: 400PM" and not "#ID IN: 400PM", since he has already checked in. So, what should I use to make this idea work? My main concern is the network (another department manages it), which is very unpredictable and loves to go down whenever it wants to.

    Update: I didn't realize my question was unclear until you pointed it out, sorry about that. My real question is: how can I configure MySQL so that all 20 servers stay synchronized with each other? MySQL Cluster? (I tried reading about it, but I'm not sure it's the right thing to do. A sketch of plain replication from a central master follows this entry.)

    My current setup (first phase):
    - a local database on each server (OS: Slackware)
    - a main server that keeps track of which staff member is at which server
    - a web-based front end that lets users see their history (it connects to the server that holds their records)

    Main pro: no worries about network problems, since each database is local.
    Main cons: a user can only check in and out at the same server; the databases/servers are not connected to each other; and I have to add a user to every server where he might want to check in. That means if he wants to go to location B, he must first check out at location A and then check in at location B, and the server at location B doesn't know that he already checked in at A.

    By the way, I've already pointed NTP at a local server. About the network: let's just say I don't have the authority to make it better. Outages don't affect all 20 servers at once, usually just a few of them, several times a week. If there is anything else you would like me to answer, please just ask.

    Read the article
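
    On the synchronization question above, one conventional approach short of MySQL Cluster is asynchronous replication from a central master: every location keeps a read-only copy and all writes go to the master. A minimal sketch with hypothetical addresses, credentials and config paths (adjust for Slackware):

        # On the central master: give it a server id and enable binary logging
        # (lines appended to my.cnf; the path is an assumption).
        printf '%s\n' '[mysqld]' 'server-id = 1' 'log-bin = mysql-bin' >> /etc/my.cnf

        # Create a replication account on the master.
        mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"

        # On each location server (each with its own unique server-id in my.cnf),
        # point a replica at the master; the file and position come from SHOW MASTER STATUS.
        mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.0.0.1',
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
          START SLAVE;"

    The limitation is that replicas are read-only copies: check-ins still have to be written on the master, so this alone does not cover the "master is unreachable" case. Multi-master or MySQL Cluster setups buy that availability at the cost of much more operational complexity on a flaky network.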

  • Using sed to convert hex characters in postgresql dump file

    - by Bernt
    I am working on moving several databases from a PostgreSQL 8.3 server to a PostgreSQL 8.4 server. It has worked fine so far, but one database has given me some trouble. The database is listed as Unicode-encoded on the 8.3 server, but somehow a client program has managed to inject some invalid Unicode data into it. When I do a normal dump and restore using Postgres' custom format, the new server won't accept it, complaining about Unicode errors. My plan is to do a plain-text dump of the database and then use sed to replace the invalid characters with nothing (they are not needed). But how do you make sed work on hex/binary values in a file? (A sketch follows this entry.)

    Read the article
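
    Two hedged options for the hex question above: GNU sed accepts \xHH escapes in patterns, and iconv -c simply drops byte sequences that are not valid in the target encoding. The byte value and file names below are only illustrative.

        # Plain-text dump instead of the custom format.
        pg_dump --format=plain mydb > mydb.sql

        # Option 1: strip one specific offending byte (0xE9 here is just an example).
        sed -i 's/\xe9//g' mydb.sql

        # Option 2: drop everything that is not valid UTF-8, whatever the byte values are.
        iconv -f UTF-8 -t UTF-8 -c mydb.sql > mydb.clean.sql

    Option 2 is usually the safer bet when you do not know exactly which bytes the client injected; afterwards the cleaned file restores with a plain psql newdb < mydb.clean.sql.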

  • Clone MySQL DB - errors with CREATE VIEW/SHOW VIEW privileges

    - by user43537
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried a dump to an .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with phpMyAdmin doesn't work either. I also tried doing this with the MySQL root user (not named "root", though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great, specifically for MySQL 5.0.32 (a dump-and-grant sketch follows this entry). Thanks!

    Read the article
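
    For the privilege errors above, a hedged sketch of the usual dump-and-restore plus explicit grants. Database and user names are placeholders, and the GRANT has to be issued by an account that itself holds these privileges WITH GRANT OPTION (on 5.0 that is typically the real root account).

        # Dump the source blog database.
        mysqldump -u root -p --single-transaction --routines wordpress > wordpress.sql

        # Create the copy and give the working user the view-related privileges
        # the import complained about (or simply ALL PRIVILEGES on the new schema).
        mysql -u root -p -e "CREATE DATABASE wordpress_clone;
          GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP,
                CREATE VIEW, SHOW VIEW, LOCK TABLES
                ON wordpress_clone.* TO 'wpuser'@'localhost';
          FLUSH PRIVILEGES;"

        # Import as that user.
        mysql -u wpuser -p wordpress_clone < wordpress.sql

    The "Access Denied" from the renamed root account suggests that account is missing some of these grants itself, which SHOW GRANTS will confirm.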

  • Why is my DB read-only when attached to SQL Express, but not with SQL Web?

    - by David Rubin
    I have an .mdf/.ldf pair, originally created in 2008 R2 Standard and well under 10GB, with these ACLs:

        d:\db snapshot\DB_NAME.mdf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F
        d:\db snapshot\DB_NAME_log.ldf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F

    When I attach the database to an instance of SQL Express 2008 R2, it comes up as read-only. When exactly the same ACLs, user accounts and SQLCMD statements are set up with SQL Web 2008 R2, it comes up writable. I looked at MSDN's comparison page but nothing jumped out at me. Why on earth is this happening? Thanks!

    UPDATE: I just noticed that the names of the attached databases are different. On SQL Express (read-only) it matches the filename (e.g. DB_NAME); on SQL Web (writable) it matches the CUSTOM_NAME that I gave it in the attach command (a quick check is sketched after this entry):

        CREATE DATABASE [CUSTOM_NAME]
            ON (FILENAME = 'PATH_TO_MDF'), (FILENAME = 'PATH_TO_LDF')
            FOR ATTACH

    Read the article
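
    A hedged way to confirm and, if appropriate, clear the flag on the Express instance, run from PowerShell on the server (instance and database names are placeholders). The first line asks the engine whether it considers the database read-only; the second clears the flag once the service account genuinely has write access to the files.

        # Is the database flagged read-only by SQL Server itself?
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT name, is_read_only FROM sys.databases"

        # If so, and the files are writable by the service account, clear it.
        sqlcmd -S .\SQLEXPRESS -E -Q "ALTER DATABASE [CUSTOM_NAME] SET READ_WRITE"

    A database frequently comes up read-only after an attach when the engine's service account can open the files but not write to them, so it is also worth comparing which accounts the Express and Web services actually run as.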

  • Is Samba Server what I'm looking for, and if so, what do I need? (currently on DD-WRT Micro)

    - by Anthony
    I am really confused as to what Samba actually does and how it works. Here's what I'm hoping it does: I set up a Samba server on my LAN, and everyone will be able to see each other's shared files and swap them. But some of the documentation makes it sound like it will just allow Mac/Linux computers to see Windows computers. Other bits of the documentation make it sound more like a local server, where a Linux machine would install Samba and would then see everyone and be visible to everyone, but that wouldn't change whether anybody else can see each other. Still other things I've read make it seem more like a file server, where everyone sees each other but file transfers are not peer-to-peer and instead need a host disk to act as a go-between. So, assuming I'm even in the right ballpark about what Samba does in terms of my goal of total cross-visibility on the network, I am left needing to know what I'd need to set up the server, and whether it can be done and is worth it. DD-WRT's article on Samba is a bit ambiguous: one second it sounds as if I can run the server on Micro as long as it's set up on a USB drive, but then it also sounds like Micro can't run it at all. If I can run it from a USB-connected drive, I still need to know whether the files are actually stored on that drive. The DD-WRT article mentions: "You can run a Samba server on your main computer and run a client on your router (thus gaining writable storage for the router) or you can use Samba to share a drive connected (typically by USB) to the router among all the computers connected to your network." That one part, "to share a drive...among all the computers", makes it sound like the only benefit I get from Samba is a shared drive that any OS on the network can see, but they still won't see each other. I'm very hopeful I'm misreading this (a minimal share definition is sketched after this entry). If the computers can see each other but still need the disk, how much space is generally a good idea? I'm basing this on the idea that the drive is a temporary store point; obviously I'd have to get a drive big enough to store everything people wanted to share if the drive is a full-on file server. If I do have this all wrong, is there any software that achieves what I have in mind? Something that connects to the main router to bridge all clients?

    Read the article
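
    To make the "shared drive" model concrete, this is roughly what a single Samba share of a USB disk looks like on an ordinary always-on Linux box; the share name and path are placeholders, and on DD-WRT the equivalent settings live in the web UI / nvram rather than a hand-edited file.

        # /etc/samba/smb.conf (illustrative minimal share)
        [global]
           workgroup = WORKGROUP
           server string = LAN file share

        [usbshare]
           path = /mnt/usb
           browseable = yes
           read only = no
           guest ok = yes

    Every client (Windows, Mac, Linux) then sees the same \\server\usbshare. Samba by itself does not make clients see each other's local disks; for that, each machine has to run its own file sharing (Windows file sharing, or Samba on the Linux/Mac boxes), at which point they all show up alongside one another in the network browse list.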

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine: a Backbone.js frontend, Rails 3.2, PostgreSQL, and Resque, with S3 for storage. The flow of the app is as follows:

    1) Request from frontend: upload a video.
    2) Store the video.
    3) Query external APIs.
    4) Process / encode the video.
    5) Post back to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. I would also rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server (BS), with the main database. Gets the request from 1), posts to 2), saves uploads to 3).
    C) S3 storage.
    D) Server for querying the external APIs. Basically just Resque workers that post info back to 2).
    E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

    So I will have:

        A) frontend
             \
              \
        B) MAIN_APP/DB ----- C) S3 Storage (Files)
            /   \                 /
           /     \               /
        D) ExternalAPI_queries  E) Video_Processing
           (redundant DB)          (redundant DB)

    All this will supposedly talk to each other via HTTP requests. My reasoning is that the video-processing part is by far the most resource-intensive, and I would just run a bare-bones application there that accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everyone connects to, and if so, how? (A sketch of the shared-database option follows this entry.)
    2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is far more intensive.
    3) What should I use to distribute calls between the backend apps based on load?

    Read the article
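
    On question 1), the common pattern is a single PostgreSQL instance on B) that the worker boxes connect to directly over the private network, rather than HTTP calls plus duplicated databases. A hedged sketch of the two settings involved; paths, names and the subnet are assumptions.

        # On B): let Postgres listen beyond localhost (postgresql.conf).
        echo "listen_addresses = '*'" >> /etc/postgresql/9.1/main/postgresql.conf

        # Allow the internal subnet the workers live on (pg_hba.conf), then restart Postgres.
        echo "host  videoapp  videoapp  10.0.1.0/24  md5" >> /etc/postgresql/9.1/main/pg_hba.conf

    The workers on D) and E) then point their database.yml at B)'s private address and fetch/store the actual video files on S3 directly; Resque similarly wants one shared Redis that every worker points at. For question 3), a load balancer (Elastic Load Balancer, nginx or HAProxy) in front of identical app instances is the usual way to spread calls by load.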

  • I am trying to set up phpMyAdmin to use with a remote MySQL database on Scientific Linux release 6.2

    - by techsjs2012
    I am trying to set up phpMyAdmin to use with a remote MySQL database on Scientific Linux release 6.2. If I use the mysql command line to connect to the remote database it works great, but if I go through phpMyAdmin I get "#2002 Cannot log in to the MySQL server". I have found that if I do a setenforce 0, phpMyAdmin can reach my remote database, but once I reboot or set setenforce back to 1 it stops working again. I know setenforce 0 is not the right thing to do, so can someone please give me detailed steps on how to get this working the right way (an SELinux sketch follows this entry)? Thanks. I am new to Scientific Linux and have been having some issues.

    Read the article
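
    The SELinux-friendly fix for the situation above is to flip the boolean that lets Apache (and therefore phpMyAdmin) open outbound database connections, instead of turning enforcement off. A short sketch; the boolean name is the one shipped with the RHEL6-family targeted policy.

        # See what the boolean is currently set to.
        getsebool httpd_can_network_connect_db

        # Allow httpd scripts to reach a remote MySQL server; -P makes it
        # persistent across reboots, so setenforce can stay at 1.
        setsebool -P httpd_can_network_connect_db 1

    If the audit log (/var/log/audit/audit.log) still shows denials afterwards, the broader httpd_can_network_connect boolean is the next thing to try.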

  • MediaWiki alternatives for small business?

    - by Jakobud
    Are there any good alternative wikis to MediaWiki for a small business? Mostly we just want to use it for documentation. MediaWiki is a fine (and slightly outdated) piece of software, but it doesn't even officially support basic things like controlling who has access to which page/article, which is an important feature for us.

    Read the article

  • writing data onto a linux live-dvd

    - by stanleyxu2005
    I have a server machine with a DVD writer. I want to burn a Linux live DVD (openSUSE is preferred) with a pre-configured web server, so that after booting, the web server is ready to serve. The web server has an SQLite database (with very little data), but after rebooting the system, all data in the database is lost. Is it possible to store the necessary data on this live DVD as well? If it were a USB drive, I would create two partitions and mount the second one read-write, but I have no idea how to create two partitions on a DVD (one workaround is sketched after this entry). Any hint is appreciated.

    Read the article
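
    A burned DVD stays read-only, so the usual workaround for the question above is to keep the SQLite file on a small writable medium and point the web application at it. A hedged sketch with made-up device names and paths:

        # Mount a small USB stick (or other writable disk) read-write.
        mkdir -p /mnt/data
        mount -o rw /dev/sdb1 /mnt/data        # /dev/sdb1 is an assumption

        # Seed it once with the shipped database, then make the live system
        # use the writable copy via a symlink.
        cp -n /srv/www/app/app.sqlite /mnt/data/app.sqlite
        ln -sf /mnt/data/app.sqlite /srv/www/app/app.sqlite

    Multisession DVDs can append new sessions, but nothing on them can be rewritten in place, which is why live distributions implement persistence with a writable overlay on USB or hard disk rather than on the DVD itself.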

  • should I put my multi-device btrfs filesystem on disk partitions or raw devices?

    - by Glyph
    I'm going to create a multi-device btrfs filesystem. The official recommendation in the documentation appears to be to create it on raw devices, i.e. /dev/sdb, /dev/sdc, etc., but this is not explained. Are there any advantages to creating a partition table on these devices first, either GPT or MBR, and then creating the filesystem on /dev/sdb1, /dev/sdc1 et cetera? Does feeding btrfs whole devices have some particular advantage, or are these basically equivalent? (Both variants are sketched after this entry.)

    Read the article
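
    For reference, the two variants being compared look like this; device names and the RAID profile are only illustrative.

        # Whole raw devices, as the btrfs documentation suggests:
        mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

        # Or partition first (GPT here) and build the filesystem on the partitions:
        parted --script /dev/sdb mklabel gpt mkpart primary 1MiB 100%
        parted --script /dev/sdc mklabel gpt mkpart primary 1MiB 100%
        mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1

    Functionally the two are equivalent for btrfs. A partition table mainly buys clearer labelling and protection from other tools that treat an unpartitioned disk as blank, at the cost of a little extra alignment bookkeeping.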

  • NetApp and Hyper-V 2012 best practice/whitepapers?

    - by grimstoner
    We've recently acquired a NetApp/Cisco UCS solution, and I'd like to gather some background knowledge as to the best practices when setting up Hyper-V 2012 on such a solution. There is an upcoming seminar (in the Netherlands, http://www.realdolmen.com/nl/MSHyper-v-2012_NetApp), but it's in Dutch, and a couple of weeks away... Does anyone have some whitepapers/documentation about such a setup, or hasn't it been done before?

    Read the article

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time, for a one-time extraction of the data meeting all the various criteria entered, in what I call a "criteria string". An example might be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might omit records not equal to one value in one field while also extracting records matching criteria in another field; that is, we would want records "not equal to" something in one field whose information equals the requested information in another field. (An SQL sketch of this kind of query follows this entry.)

    Read the article
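
    For comparison, in any SQL-backed membership database that kind of "criteria string" is just a WHERE clause combining the conditions. The table and column names below are invented for illustration (sqlite3 is used only so the line is runnable anywhere).

        sqlite3 members.db "
          SELECT member_id, last_name, zip, member_type
          FROM   members
          WHERE  zip IN ('67893','54235','54323','54201','54302',
                         '54303','54301','67894','67895')
            AND  member_type <> 'lapsed';"

    The IN list covers the "match any of these zip codes" case and the <> condition covers the "not equal to" requirement on another field; any desktop database or reporting tool can express the same combination.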

  • Equivalent of phpMyAdmin for MSSQL?

    - by Tedd Hansen
    Is there any web interface for administering MSSQL similar to phpMyAdmin (for MySQL)? I want a self-service setup where developers can create a database through a web interface and upload/download backups of the database without local access. I've considered phpMSAdmin, but it hasn't had a release since 2006, so I'm not sure it's worth the effort of setting it up. If there is something else (free or not-so-free), that would be great. My question is similar to this one posted two years ago, but no good web interface was found back then. SQL Web Data Administrator seems interesting, but it lacks a few features, most notably creating new databases (and it hasn't been updated since 2007).

    Read the article

  • MySQL/Apache: Replace spaces with underscores only in certain URLs

    - by javipas
    I'm having a problem with some images on my WordPress blog. After a migration I renamed every image, replacing spaces with underscores, so HIDDEN_264_4062_FOTO_IDF los MID.jpg was renamed to HIDDEN_264_4062_FOTO_IDF_los_MID.jpg. Although the trick was necessary and worked for most of the posts, some of them still point at the old image name, with spaces. This is not found: http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF%20los%20MID.jpg, and this would be the right URL: http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg. Careful, though: the "%20" is only shown in the browser; the text in the database contains spaces, not "%20". I'd like to know if I could run a SQL query on my WordPress MySQL database that replaces the spaces in .jpg filenames with underscores. The path of the images is always the same, so the rule should transform /files/HIDDEN_264_4062_FOTO_IDF los MID.jpg into /files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg; the "/files/HIDDEN_264_" part is always the same, but the rest varies. Is there some way to do this? Maybe a rewrite rule on Apache, our current web server? (A rewrite-rule sketch follows this entry.)

    Read the article
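
    Since the old post content keeps the space-containing names, a hedged Apache-side fix is to rewrite those requests onto the underscore names rather than editing the database. This .htaccess sketch converts one space per pass and uses [N] to loop until none are left; the assumption is that the rules sit in the .htaccess at the site root and that the files really live under /files/.

        # .htaccess at the document root (sketch)
        RewriteEngine On
        # Turn ".../HIDDEN_264_... los MID.jpg" into ".../HIDDEN_264_..._los_MID.jpg",
        # one space at a time; [N] restarts rule processing after each substitution.
        RewriteRule ^(files/HIDDEN_264_[^\ ]*)\ (.*\.jpg)$ $1_$2 [N]

    A SQL UPDATE with REPLACE() on wp_posts.post_content is also possible, but a blanket space-to-underscore replacement would mangle ordinary text, so it would need a carefully scoped pattern; the rewrite only ever touches requests for those image paths.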

  • Good Booking Engines Suggestions

    - by user28139
    I'd like some suggestions for a customizable/open-source booking engine for hotels that you've used or had coded. The booking engine I'm looking for is one where you can add and customize fields (address, rates, and other details) and that I can easily integrate into my existing site. I was looking at CultBooking, but I've been having a hard time understanding its interface and documentation. Thanks in advance!

    Read the article

  • Restore dpm 2010 protection groups from partitions

    - by Dragouf
    Hello, I have Data Protection Manager (DPM) 2010. I did a backup of my system, which was saved to several different partitions. The computer running DPM crashed and will not let me restore the backup; however, I still have all the backups as partitions. How can I restore the multiple protection groups from the existing physical partitions? I have been searching the MSDN documentation for a solution, but no luck so far. Thanks for your help.

    Read the article

  • IIS Hangs on SQL Connections when running ASP.net applications

    - by PaulWaldman
    We have a database server running SQL 2000 and two web servers hosting ASP.NET applications. All three servers run Windows Server 2003 SP2. Our issue recurs about every two weeks: IIS on one web server is no longer able to establish SQL connections. Static content loads fine, other non-IIS applications are still able to contact the SQL database server, and ODBC functionality also still works. While running SQL Profiler, I can see that no connection is ever established from IIS while it is in this state. The only way to fix the situation is to restart the web server. There are no firewalls installed on any of the machines. (A quick diagnostic is sketched after this entry.)

    Read the article
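
    One classic cause of exactly this pattern (static content fine, new SQL connections failing until a restart) is connections being opened faster than they are released, slowly exhausting the pool or the box's outbound ports. A hedged first check, run from a command prompt on the affected web server while it is healthy and again when it is wedged:

        netstat -an | find /c ":1433"
        netstat -an | find /c "TIME_WAIT"

    A count of connections to the SQL port (1433 assumed as the default) that grows steadily between restarts points at SqlConnection objects that are opened but never disposed, so pooling cannot recycle them. If instead there are thousands of sockets in TIME_WAIT, the MaxUserPort and TcpTimedWaitDelay values under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters are the usual Server 2003 knobs.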

  • How do I create a bridged virtual network for libvirt+KVM+ubuntu-server the easy way?

    - by Arthur Ulfeldt
    I see lots of documentation on how to manually set up a network bridge, manually add VMs' tun devices to that bridge, and then write a shell script that glues it all together. Lots of work, very manual, and not impressive. On the other hand, if you want to NAT KVM+libvirt VMs to the network, you just click the new network button in the virt-manager GUI and relax. Am I missing "the easy way" of letting a VM share the physical network with the host? (A bridged-network sketch follows this entry.)

    Read the article
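
    The moving parts for the bridged "easy way" on Ubuntu are a host bridge plus a small libvirt network that points at it, after which guests just pick that network in virt-manager. Interface names, paths and the DHCP choice below are assumptions. First the bridge (bridge-utils installed, added to /etc/network/interfaces):

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0
            bridge_stp off

    Then a libvirt network definition saved as, say, /tmp/host-bridge.xml:

        <network>
          <name>host-bridge</name>
          <forward mode="bridge"/>
          <bridge name="br0"/>
        </network>

    Registered once with libvirt, it appears in the same list as the NAT network:

        virsh net-define /tmp/host-bridge.xml
        virsh net-start host-bridge
        virsh net-autostart host-bridge

    Guests attached to it get addresses from the same LAN as the host, which is the shared-physical-network behaviour the NAT default does not give you, and no per-VM tun/bridge scripting is needed.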

  • Openfire Installation Issue - Can't Login to admin panel

    - by Lobe
    I am trying to get Openfire installed on an Ubuntu virtual machine; however, upon completing the web-based installer, I am unable to log in to the admin panel. So far I have:
    - downloaded the Debian installer
    - installed using the stock options
    - added a database and built the structure using the supplied SQL file
    - completed the web-based installer
    I am now trying to log in with the username admin and my password, but I constantly get a wrong username/password error. There is a record in the MySQL database showing the admin user with an encrypted password, and changing it to an unencoded password doesn't work. What is the problem here? (A reset sketch follows this entry.)

    Read the article
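
    A common way out of the situation above is to reset the admin account directly in the database and restart Openfire; the table and column names below are the ones used by recent Openfire releases (ofUser), and older versions call the table jiveUser, so treat this as a hedged sketch.

        # Give 'admin' a known plaintext password; Openfire falls back to
        # plainPassword when encryptedPassword is NULL and re-encrypts it later.
        mysql -u root -p openfire -e "
          UPDATE ofUser
             SET plainPassword = 'admin', encryptedPassword = NULL
           WHERE username = 'admin';"

        # Restart so the change is picked up.
        /etc/init.d/openfire restart

    It is also worth confirming that the installer finished writing its config (on the Debian package, /etc/openfire/openfire.xml should contain <setup>true</setup>); if the web installer died half-way, the admin account may simply never have been written.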

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

    - 3 hard drives using RAID 5 (stripe element size: 1M; read policy: Adaptive Read Ahead; write policy: Write Through)
    - /boot: 200 MB, ext2
    - /: 15 GB, ext3
    - swap: 10 GB
    - LVM: the rest (~500 GB)

    I installed PostgreSQL and created a big database (over 1 GB). I have an SQL request that takes a long time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds.

    Then I installed Xen and created a domU with another Debian install. On this OS I also installed PostgreSQL, with the same database. The same SQL request takes only 2.5 seconds to run.

    I checked the kernel on both dom0 and domU; uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same; for those that are relevant, I changed their values to make them match on both systems using sysctl, and saw no change (the request still takes the same amount of time). After that, I checked the file systems (I used ext3 on domU): still no change. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results.

    Now I am stuck. I don't know what is different and what could be the cause of such a big difference.

    Additional info:
    - Debian runs on a Dell PowerEdge 2950 server
    - postgresql: 9.1.9 (both dom0 and domU)
    - xen-linux-system: 3.2.0
    - xen-hypervisor: 4.1

    Thanks.

    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both read and write speed. Here is domU:

        root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
        root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

        root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
        root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behaviour, and how can we "fix" it? Can we apply it to dom0?

    EDIT 2: I switched my virtual disk type. To do so I followed this article. I did

        dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M

    and then in /etc/xen/test1.cfg I changed the disk parameter to use file: instead of phy:. That should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres). (A cache-bypassing variant of the dd test is sketched after this entry.)

    Read the article
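
    The dd numbers above are dominated by the Linux page cache: the 3.4 GB/s "read" on dom0 is the just-written 1.7 GB file coming straight back out of RAM. A hedged re-run that takes the cache out of the picture makes dom0 and domU comparable:

        # Write test: only report once the data has actually reached the disk.
        dd if=/dev/zero of=/root/dd bs=1M count=2000 conv=fdatasync

        # Read test: drop cached pages first, then bypass the cache entirely.
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/root/dd of=/dev/null bs=1M iflag=direct

    If a gap remains with caching excluded, the next things to compare are how each kernel sees the controller's Write Through policy and the barrier/flush behaviour of the phy: versus file: disk backends, since dom0 and domU can end up with different effective write caching even on the same spindles.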

  • How to make a JBoss service to handle Protocol Buffers directly?

    - by mlaverd
    Hello everyone, I'm interested in building a JBoss service. Because I'm reusing some existing code, the service must be able to talk SSL/TLS and Protocol Buffers. The documentation I see on the JBoss wiki makes it look like services have their transport and data interpretation handled by JBoss itself. Is it really the case? How could I implement this requirement?

    Read the article

  • Can I chain authentication methods in Apache?

    - by jldugger
    I've got an existing SVN system that we're migrating from SVN AuthUserFile (a flat-file format) to LDAP authentication. In doing so, we'd like to establish a transitional phase during which both LDAP and AuthUserFile work. Does Apache support fall-through authentication mechanisms? I'm reading the documentation and it's still not clear either way. (A chained-provider sketch follows this entry.)

    Read the article
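
    With Apache 2.2's mod_auth_basic the answer is yes: listing several providers in AuthBasicProvider makes authentication fall through to the next provider whenever the previous one does not know the user. A hedged sketch with placeholder paths and LDAP URL, added inside the existing <Location> or vhost block for the repository (mod_authnz_ldap must be enabled):

        AuthType Basic
        AuthName "Subversion repository"
        # Try the legacy flat file first, then LDAP.
        AuthBasicProvider file ldap
        AuthUserFile /etc/apache2/svn.htpasswd
        AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
        AuthzLDAPAuthoritative off
        Require valid-user

    One caveat for the transition period: the fall-through only happens when the file provider has no entry for the user at all; a user who still exists in the flat file but whose password only matches LDAP is refused, so stale flat-file entries should be pruned as people move over.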
