Search Results

Search found 61623 results on 2465 pages for 'data storage'.


  • Error in data view when connecting to an Oracle DB

    - by Mike Polen
    When using SharePoint Designer I found this link that stepped me through how to get it working: http://spsolution.blogspot.com/2008/12/how-to-insert-data-source-in-sharepoint.html That allowed SharePoint Designer to talk to Oracle, but when I placed a data view on a page it gave me the following error (logged by w3wp.exe under Windows SharePoint Services Web Parts, event 89a1, Monitorable, on 09/14/2009 14:40:23):

        Error while executing web part: System.Data.OracleClient.OracleException: ORA-00923: FROM keyword not found where expected
          at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc)
          at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals)
          at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, ArrayList& resultParameterOrdinals)
          at System.Data.OracleClient.OracleCommand.ExecuteReader(CommandBehavior behavior)
          at System.Data.OracleClient.OracleCommand.ExecuteDbDataReader(CommandBehavior behavior)
          at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
          at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
          at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
          at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
          at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments)
          at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback)
          at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigatorInternal()
          at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator()
          at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator(IDataSource datasource, Boolean originalData)
          at Microsoft.SharePoint.WebPartPages.DataFormWebPart.GetXPathNavigator(String viewPath)
          at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()

    I am mystified.

    Read the article

  • Tape Storage - How do I set up a tape backup system for use with my NAS

    - by John Himmelman
    I currently have a QNAP NAS with a RAID 5 config (~600 GB of storage) but don't have a reliable backup solution. I've heard great things about tape backup systems (reliability, durability, etc.). How can I go about setting up a tape backup system? The tape drives seem very expensive ($1k+ for a decent one, more than the price of my NAS). What are the important specs to compare and features to take into consideration? Edit: Does anyone have links to some good resources? There are tons of articles, guides, and sites on this subject, and I'm not sure where to start.
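
    Once a drive is attached to a Linux host (or to the NAS itself, if the firmware supports it), the day-to-day mechanics are fairly simple. Below is a minimal sketch, assuming a SCSI/SAS drive that shows up as /dev/st0 and a NAS share mounted at /mnt/nas; both names are illustrative.

        # check the drive is detected and the tape is loaded
        mt -f /dev/st0 status
        mt -f /dev/st0 rewind

        # write two archives back to back (nst0 = "no rewind on close",
        # so the second tar lands after the first instead of overwriting it)
        tar -cvf /dev/nst0 /mnt/nas/documents
        tar -cvf /dev/nst0 /mnt/nas/photos

        # verify: rewind, then list the first archive
        mt -f /dev/nst0 rewind
        tar -tvf /dev/st0

    In practice a backup scheduler such as Bacula or Amanda wraps these steps, keeps a catalogue, and handles tape rotation, which is usually what makes a tape setup reliable rather than the bare drive itself.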

    Read the article

  • Installing Intel Rapid Storage Drivers makes my eSATA Drives act weird

    - by Filip Ekberg
    I have an HP EliteBook 8530w. This laptop has an eSATA port into which I want to plug my LaCie d2 Quadra V2 1TB hard drive. It all works well on a fresh install of Windows 7 without the Intel chipset drivers installed. However, when I install the Intel Rapid Storage drivers or the Intel Matrix Storage software, my drive seems to "disconnect" when I use it too much. I have a lot of Virtual PCs on the drive, and when I start them the disk somewhat disconnects. What could cause this?

    Read the article

  • Setting default version on Azure Blob Storage?

    - by Erik
    What is the easiest way, without having to create your own utility, to set the default service version to the latest in Azure Blob Storage? http://msdn.microsoft.com/en-us/library/windowsazure/dd894041 There is basically nothing to be set in the Azure portal, and I am having a difficult time finding working utilities to use for Azure. For some reason Azure is defaulting to the oldest version, which does not send things like the HTTP Range header, for example. Is there any utility that can do this? Thank you.

    Read the article

  • Windows Storage Server 2003 R2 RDP Disconnection

    - by Antitribu
    I am unable to remote desktop to our Windows Storage Server 2003 R2 machine. There are a few people around with similar issues but no answers. Symantec Endpoint Protection is installed, but without network protection. Using rdesktop on Linux I receive:

        ERROR: send: Connection reset by peer
        NOT IMPLEMENTED: PDU 12
        ERROR: Connection closed

    where the "PDU XX" number changes on each connection. In Windows, using the latest mstsc, I receive: "The connection was lost due to a network error. Try connecting again. If the problem persists, contact your network administrator." Any ideas?

    Read the article

  • How does fail2ban 0.9 database storage actually work?

    - by Arantir
    Fail2ban 0.9 introduces database storage so that bans survive a restart, but I can't figure out how the mechanism actually works. There is a dbpurgeage parameter which controls the lifetime of old bans and defaults to 24 hours. As far as I can tell from reading the code, fail2ban saves a ban to the database with timeofban equal to the moment the ban is saved. Then, every dbpurgeage period, it removes all bans with timeofban < MyTime.time() - self._purgeAge; in other words, it removes all bans that were stored more than 24 hours ago. But what if an IP was banned for a month? Does all this mean that with dbpurgeage = 86400, any ban issued more than 24 hours before a restart, including long bans, will already have been purged and therefore lost? I just want all my permanent bans to be preserved in any case.
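
    If the concern is just that long bans should survive restarts, one hedged workaround is to keep dbpurgeage at least as large as the longest bantime you use, so the entries are still in the database when fail2ban comes back up. A minimal sketch (the jail name is made up, and it assumes the stock layout where dbpurgeage lives in the [Definition] section of fail2ban.conf):

        # /etc/fail2ban/fail2ban.local  -- overrides fail2ban.conf
        [Definition]
        dbfile = /var/lib/fail2ban/fail2ban.sqlite3
        dbpurgeage = 2592000        ; 30 days, in seconds

        # /etc/fail2ban/jail.local
        [ssh-long]
        enabled = true
        bantime = 2592000           ; a "permanent-ish" 30-day ban

    This doesn't change the purge logic itself; it only pushes the purge horizon beyond the ban length so the situation described above can't occur.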

    Read the article

  • Sharing external storage between different operating systems?

    - by CT
    I just received a LaCie 1TB external USB/FireWire/eSATA drive. I have 2 machines: a MacBook running OS X 10.6 and a desktop running Windows 7. I would like to rip my DVD collection to ISO and store it on the external drive. Right now I use my MacBook's Disk Utility to rip DVDs to ISO, but my desktop is what is connected to my HDTV; I mainly just use the desktop for media. I'd like to format the external with a 200 GB partition for Time Machine backups and have the rest for storage. DVD ISOs are often above 8 GB, so that rules out the FAT file systems, correct? Do I have any options that let both my Mac and my PC see the drive?
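
    One common answer is exFAT for the shared part: OS X 10.6.5 and later can read and write it, Windows 7 supports it natively, and it has no 4 GB file-size limit, so large DVD ISOs are fine. A hedged sketch from the Mac side, assuming the LaCie shows up as /dev/disk2 (check with diskutil list first; this erases the whole drive):

        diskutil list
        diskutil partitionDisk /dev/disk2 2 GPT \
            JHFS+ TimeMachine 200g \
            ExFAT Media R

    The first (HFS+) partition holds the Time Machine backups, which Windows won't see; the second (exFAT) partition is the shared media store. A GPT scheme is readable as a data disk by both Windows 7 and OS X.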

    Read the article

  • VMware ESXi 4.1 storage errors with MD3200

    - by Karl Katzke
    We're seeing some storage errors in the ESXi logs relating to our MD3200. I'm sort of a VMware noob and am not sure where to go from here, because I couldn't find much documentation on the VMware website and the forums didn't seem to have any posts about it with actual answers. Everything is working, but I'm trying to proactively troubleshoot this.

        sfcb-vmware_base|StoragePool Cannot get logical disk data from controller 0
        sfcb-vmware_base|Volume Cannot get logical disk data from controller 0
        sfcb-vmware_base|storelib-GetLDList-ProcessLibCommandCall failed; rval = 0x800E

    The ESXi boxes are connected directly via SAS to the controller on the MD3200. What do these errors actually mean, and what's a good path to start troubleshooting or solving them?

    Read the article

  • What is the easiest way to get MySQL's Archive Storage Engine working on CentOS 5.4

    - by tronda
    The Archive storage engine is not enabled in the default build of MySQL on CentOS/RHEL. I would like to enable it on our CentOS 5.4 server. My initial reaction was to modify the spec file in the SRPM, but it seems this might not be that easy. There's always the option to build from the MySQL source, but I would prefer, if possible, to stay within the RPM/Yum world. Does anybody have a successful approach to this using RPMs/SRPMs/Yum? Perhaps some patches which make this work flawlessly with the SRPMs?
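
    A sketch of the two obvious checks/routes, with all package names and paths illustrative rather than verified against the exact CentOS 5.4 packages:

        # 1. confirm the engine really is compiled out
        mysql -e "SHOW ENGINES\G" | grep -i -B1 -A2 archive

        # 2. rebuild the distro SRPM with the archive engine switched on
        #    (on CentOS 5 the build tree is /usr/src/redhat)
        yumdownloader --source mysql-server       # or fetch the .src.rpm from the vault
        rpm -ivh mysql-*.src.rpm
        # edit /usr/src/redhat/SPECS/mysql.spec and add
        #     --with-archive-storage-engine
        # to the existing configure options, then:
        rpmbuild -ba /usr/src/redhat/SPECS/mysql.spec
        rpm -Uvh /usr/src/redhat/RPMS/$(uname -i)/mysql-server-*.rpm

    This keeps yum aware of the package, at the cost of having to repeat the rebuild whenever a MySQL security update lands.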

    Read the article

  • HALEVT troubleshooting: VFAT USB storage device gets mounted with root:root user:group

    - by Nova deViator
    Hi, I've been banging my head against this problem for a number of days. Using halevt for automounting, everything mostly works; the only thing is that halevt mounts external USB storage devices as root, so as a user I cannot write files on them. halevt gets run as the halevt user on boot through an /etc/init.d script. This is Ubuntu Lucid with the Awesome WM, no GDM. Running halevt as my user doesn't seem to work (halevt runs but doesn't respond on insert). I know HAL is deprecated and removed and I should probably write my own udev rules, but until then it seems there must be a simple hack that enables mounting VFAT/NTFS devices with a specific uid/gid. This question/answer helps a lot, but not specifically with the above.
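
    One low-tech hack in the meantime, sketched below with a made-up UUID and mount point: give the device a static /etc/fstab entry carrying the uid/gid you want. VFAT (and NTFS via ntfs-3g) take ownership from the mount options, so it no longer matters that the mount itself is performed by root or by the halevt user.

        # /etc/fstab  (UUID from `blkid`; both values hypothetical)
        UUID=4A1C-55D9  /media/usbdisk  vfat  user,noauto,uid=1000,gid=1000,umask=002  0  0

        # one-time setup, then mounting is allowed as a plain user
        sudo mkdir -p /media/usbdisk
        mount /media/usbdisk

    The obvious limitation is that it only covers devices listed in fstab, not arbitrary sticks, which is where a proper udev rule would eventually take over.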

    Read the article

  • Triple Boot - With storage partition

    - by art
    I'm new to the multi-boot world, as I used to rely on virtualization for running Linux. Recently I moved to dual-booting Windows 7 and Ubuntu, with a storage partition for all my files that both operating systems can access. Is it possible to have one partition for Windows 7, another for XP, another for Ubuntu, and a separate partition where all the OSes can access my files? That would be 4 partitions on my HDD. And if there's a better way to go about this (or if it's not possible), please let me know! Thanks

    Read the article

  • How to present shared storage for MS Cluster Services running on vSphere 5

    - by MDMarra
    I've seen two approaches to presenting shared storage to Windows Server 2008 R2 cluster VMs on VMware vSphere. One is the traditional method of carving out a LUN on your SAN and presenting it to both nodes through the Microsoft iSCSI software initiator. The other method is to create a VMDK on an existing LUN, attach it to both nodes, and make it an independent disk so that it isn't affected by snapshots. Is one way the "correct" way, or are both viable? Is there any advantage or disadvantage to either?
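
    For reference, here is a hedged sketch of what the shared-VMDK variant involves (the datastore, folder and controller numbers are made up, and the settings are easier to apply through Edit Settings than by hand):

        # the shared disk must be eagerly zeroed
        vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/mscs/quorum.vmdk

        # each node then attaches it on a dedicated controller with SCSI bus
        # sharing set to physical (cluster-across-boxes); the resulting .vmx
        # entries look roughly like:
        #   scsi1.present    = "TRUE"
        #   scsi1.virtualDev = "lsisas1068"
        #   scsi1.sharedBus  = "physical"
        #   scsi1:0.fileName = "/vmfs/volumes/datastore1/mscs/quorum.vmdk"
        #   scsi1:0.mode     = "independent-persistent"

    The in-guest iSCSI approach sidesteps all of this at the cost of running the storage traffic through the guest network stack, which is essentially the trade-off being asked about.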

    Read the article

  • How does data storage work? [closed]

    - by Andres Adhi
    I am really new to the whole concept of data storage, domains, servers and everything else related to this. Can someone please explain what a domain is? How are servers part of the domain, and how are databases stored on the server or domain? How does a new server connect to an existing database server to get all the data it needs? I tried to find this information on the web but I am not really finding a good resource. It may be because this is really basic information. I would really appreciate it if someone could explain these concepts in plain terms. Thanks in advance.

    Read the article

  • VMware V2V migration without shared storage

    - by TheCleaner
    I would like to do the following:

    1. Take a physical box running W2K8 R2 and SQL 2008 R2 and do a P2V on it to a 4.1 Enterprise-licensed cluster. (No worries here, I can do that part.)
    2. Take the existing physical box that is freed up and install VMware Hypervisor 5.0 on it. (Again, I can do this part.)
    3. Do a V2V migration of the VM created in step #1 above from the Enterprise 4.1 cluster to the standalone host. They are NOT using shared storage.

    Step #3 is where I'm confused as to what my best option is. I found an article online talking about using Veeam FastSCP and just shutting down the VM on the cluster, removing it from inventory, copying the files over to the new host and then adding it to inventory. Is that the best way to accomplish this?
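
    The FastSCP copy works; a hedged alternative that avoids staging the files manually is ovftool, which ships with the vSphere CLI and can stream a powered-off VM directly between hosts. All names and inventory paths below are hypothetical:

        # shut the VM down on the cluster first, then:
        ovftool \
          "vi://administrator@vcenter.example.local/MyDC/vm/sqlbox" \
          "vi://root@standalone-esxi.example.local/"

        # power it on on the standalone host, then unregister the old copy
        # from the cluster once you're happy with it

    Either way the copy runs over the management network, so the transfer time is bounded by that link rather than by any storage fabric.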

    Read the article

  • Need a place to store a few bytes of meta information on storage media

    - by Jason C
    I'm working on an embedded project. I need a place to store some filesystem-independent meta information on a storage device. The device has an MS-DOS partition table. The device also may have unallocated space (depending on its size), but it will be TRIMmed (and may also be blown away by new partitions in the future). I need a location on the device that is not unallocated and that has a low risk of being touched (outside of completely erasing the device). The device is only guaranteed to have an MBR at the point the metadata first needs to be written, meaning there are no EBRs/VBRs present that I could use. There are 446 bytes at the very start of the device available for MBR bootstrap code. Currently my only idea is to store the data at the end of this block. However, the device is bootable and I have no way of knowing if I'd be blowing away bootstrap code or not. The sector size is 512 bytes and the MBR is the first sector; I'm pretty sure (correct me if I'm wrong) that this means the second sector is available for use by partition data, so I can't use that either. Does anybody have any ideas? I need 4 bytes of space.
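
    For what it's worth, the mechanics of reading and writing a handful of bytes at a fixed offset are trivial; the hard part is exactly the question asked, i.e. knowing the offset is safe. The sketch below uses offset 436, the last 4 bytes before the optional 32-bit disk signature that many tools keep at bytes 440-443. That choice is an assumption, not a guarantee that no boot loader uses those bytes, so inspect first and treat non-zero content as occupied.

        # inspect the candidate bytes (sdX is a placeholder)
        dd if=/dev/sdX bs=1 skip=436 count=4 2>/dev/null | xxd

        # write a 4-byte marker (example value 0xDEADBEEF) only if they were zero
        printf '\xde\xad\xbe\xef' | dd of=/dev/sdX bs=1 seek=436 count=4 conv=notrunc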

    Read the article

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate database for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2 TB spinning disk every week, and dropping that database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the whole process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than the time taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance.
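
    MySQL itself has no transparent multi-datadir merge, but because each database is just a subdirectory of datadir, the usual workaround is to move an individual database's directory to the big disk and symlink it back. A hedged sketch (paths and the database name are hypothetical; for InnoDB this assumes innodb_file_per_table, and symlinked database directories are tolerated rather than officially supported):

        sudo service mysql stop
        sudo mv /var/lib/mysql/archive_2013_w42 /mnt/spinning/mysql/archive_2013_w42
        sudo ln -s /mnt/spinning/mysql/archive_2013_w42 /var/lib/mysql/archive_2013_w42
        sudo chown -R mysql:mysql /mnt/spinning/mysql/archive_2013_w42
        sudo service mysql start

    "Archiving" then becomes moving a directory and re-pointing a symlink instead of dump/drop/reload, while queries against the archived data simply run at spinning-disk speed. On AppArmor/SELinux systems the new path also has to be permitted for mysqld.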

    Read the article

  • Windows Storage Server 2003: Shadow Copy (VSS) enabled for volume, but Previous Version not visible

    - by Jaap
    Windows Storage Server 2003. We have Shadow Copy (VSS) enabled for one volume. However, the Previous Versions tab is not visible on the server in the Properties dialog of any file or folder on that volume. We don't want this tab to be visible on the clients, just on the server. I've checked the VSS settings and they're definitely enabled for the volume. I'm stuck! Do I have to install the client software on the server? The folder %systemroot%\system32\clients\twclient contains 3 empty subdirectories (location copied from the docs)...

    Read the article

  • Unknown/unsupported storage engine: InnoDB | MySQL, Ubuntu

    - by Kayle
    I recently upgraded from the previous LTS Ubuntu to Precise and now MySQL refuses to start. It complains of the following when I attempt to start it:

        $ sudo service mysql restart
        stop: Unknown instance:
        start: Job failed to start

    And this shows up in /var/log/mysql/error.log:

        120415 23:01:09 [Note] Plugin 'InnoDB' is disabled.
        120415 23:01:09 [Note] Plugin 'FEDERATED' is disabled.
        120415 23:01:09 [ERROR] Unknown/unsupported storage engine: InnoDB
        120415 23:01:09 [ERROR] Aborting
        120415 23:01:09 [Note] /usr/sbin/mysqld: Shutdown complete

    I've checked permissions on all the MySQL directories to make sure it has ownership, and I also renamed the previous ib_logfiles so that it could recreate them. I'm just getting nowhere with this issue right now, after looking at Google results for 2 hours.
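
    A hedged checklist rather than a definitive fix; these are the usual suspects when InnoDB comes up as unknown/unsupported right after an upgrade, and the paths assume the stock Ubuntu layout:

        # 1. make sure nothing in the config disables the engine
        grep -Rn "skip-innodb\|ignore-builtin-innodb" /etc/mysql/

        # 2. stale InnoDB log files whose size no longer matches
        #    innodb_log_file_size also abort startup; move them aside
        #    (don't delete) and retry
        sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
        sudo mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
        sudo service mysql start

        # 3. if the datadir was ever moved, check AppArmor isn't blocking it
        sudo grep -i "denied.*mysqld" /var/log/kern.log | tail -n 20

    The "[Note] Plugin 'InnoDB' is disabled" line is worth chasing first, since it suggests the engine is being switched off or failing before any table is even touched.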

    Read the article

  • Routers with USB plug to connect external storage

    - by sixtyfootersdude
    I am just about to buy a new wireless router. I want to be able to hook a hard drive up to it and let the hard drive serve the entire network. I will mostly be storing media and some backups on the drive. I know I could get some kind of NAS, but I would prefer to just hook up one of my many unused hard drives directly to my router. It looks like D-Link has several products that do this using SharePort. If you wanted to have network storage, how would you do it? With a NAS? Using a router with a USB port? Are these systems robust? What router would you buy?

    Read the article

  • "System volume folder" always appearing on USB storage stick

    - by ?????? Oyewole
    Whenever I move or copy video files from the PC (Windows 8.1) to my USB storage device and plug it into my TV, I always see a System Volume Information folder on the USB device. This folder can be seen on the PC as well, if I choose to view protected system files. My flash drive is formatted with a FAT32 file system. The question is: why is this happening on Windows 8.1, since I never had this problem on Windows 8 before upgrading, and how can I disable this feature? OK, that's two questions.

    Read the article

  • Storage drives are causing system crashes

    - by Chad
    I'm running CentOS 5.4 with 750 GB (NTFS) and 2 TB drives for storage. Originally I installed the 750; everything seemed fine, and then I installed the 2 TB drive, already partitioned with NTFS. I noticed that when I would copy a lot of videos it would crash (no mouse or response from the server) about 20 minutes in. After doing some troubleshooting I noticed the 750 would also crash when doing the same task, so I decided that NTFS may be the problem. I unmounted the 2 TB drive and tried to partition and format it using ext2, but parted would crash at the point "writing inode tables". Looking at the dmesg logs, I believe this is the error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea as to what could be causing this?

    Read the article

  • Using Windows Azure storage for backup

    - by Bruno
    I am currently looking at Windows Azure blobs as an option for backing up archive data. I want to be able to upload files from an external Windows machine via the internet, but I don't know enough about Windows Azure storage to make a decision. Some of the questions I have are:

    - How do I upload the files? Is there a client application? Can I use robocopy?
    - Would it be fast enough, i.e. could I download or upload 1 TB of data in a week?
    - Is it secure?

    Hopefully someone smarter than me can help me :-)
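
    On the "is there a client application" point, robocopy only talks to file systems and SMB shares, so it can't target blob storage directly; the usual command-line route today is Microsoft's AzCopy, which runs on Windows and transfers over HTTPS. A hedged sketch (the account, container and SAS token are placeholders):

        azcopy copy "D:\archive" "https://myaccount.blob.core.windows.net/backup?sv=...&sig=..." --recursive

    Throughput is then mostly a question of upload bandwidth: 1 TB in a week needs a sustained ~13 Mbit/s upstream, which is worth checking against the connection before committing to the approach.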

    Read the article

  • Accessing a storage-side snapshot of a cluster-shared volume

    - by syneticon-dj
    From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to:

    1. expose the shadow copy as a separate LUN
    2. offline the original CSV in the cluster
    3. un-expose the LUN carrying the original CSV
    4. make sure my cluster nodes have detected the new LUN and no longer list the original one
    5. add the volume to the list of cluster volumes, promote it to be a CSV
    6. copy off the data I need
    7. undo steps 5.-1. to revert to the original configuration

    This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?

    Read the article

  • KVM guest storage differences between NBD and NFS

    - by WojonsTech
    I am setting up my own little private cloud for personal use, maybe for a project or two. I am using Linux KVM on Debian 6. I have 3 servers: 2 of them as compute nodes and 1 as a storage node. I have already installed KVM, made a few test machines, and got my networking set up. I have 2 NICs on each server: one NIC is for web traffic, the other is for internal/storage network traffic. My first idea was to use NFS for storing the guest machine images, which can range in size, maybe 8 GB, maybe 100 GB, it just depends. I have heard of NBD before and it seems like it could work, but I don't know what the performance differences are and whether it will affect my environment; NFS looks like it will be easier to use.
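
    For a concrete picture, here is a hedged sketch of what each option looks like on the storage node; hostnames, subnets and the qcow2 image name are all made up:

        # --- NFS: export a directory of images, compute nodes mount it ---
        # /etc/exports on the storage node:
        #   /srv/vmimages 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
        exportfs -ra
        # on a compute node:
        mount -t nfs storage01:/srv/vmimages /var/lib/libvirt/images

        # --- NBD: export each image as its own block device ---
        # on the storage node (one qemu-nbd process per image/port):
        qemu-nbd --format=qcow2 --bind=10.0.0.5 --port=10809 --persistent /srv/vmimages/vm1.qcow2
        # on a compute node:
        modprobe nbd
        nbd-client 10.0.0.5 10809 /dev/nbd0

    The practical difference follows from that shape: NFS gives one shared namespace, so management and moving guests between the compute nodes stay simple, while NBD gives one raw block device per guest with less filesystem overhead but per-image export bookkeeping and no concurrent sharing.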

    Read the article

  • FreeNAS plugins not able to access storage

    - by dave
    I've just set up a FreeNAS box and have a couple of plugins (Sick Beard and SABnzbd) installed. Both of these have you select a directory where downloads should go. My storage is on /mnt/MediaVolume/; however, when I navigate to /mnt from the plugins, it's an empty directory. When I SSH to the box, though, I can see it just fine. I'm thinking it may have something to do with permissions, but I'm not sure. Any suggestions on how to allow these plugins to view/have access? Thank you!
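
    The likely explanation is the plugin jail rather than permissions: FreeNAS plugins run inside a jail with its own root filesystem, so /mnt inside the jail is not the server's /mnt until the volume is mapped in. The supported way is the jail's "Add Storage" option in the GUI; underneath it is essentially a nullfs mount, which this hedged sketch reproduces with made-up jail and directory names:

        # on the FreeNAS host (not inside the jail)
        mkdir -p /mnt/MediaVolume/jails/sabnzbd_1/media
        mount_nullfs /mnt/MediaVolume/downloads /mnt/MediaVolume/jails/sabnzbd_1/media

    SABnzbd and Sick Beard then see the files at /media inside the jail and can be pointed there in their settings.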

    Read the article
