Search Results

Search found 64715 results on 2589 pages for 'spring data rest'.

Page 255 of 2589

  • Sharing information between applications

    - by Zé Carlos
    My question is very simple to state: I have a few applications that share data between them. I need a way to support that data sharing (across several computers), and when one application changes the data, the others should be notified about it. My question is about which technologies could be useful to me. The solution I can see at the moment is a shared database plus an external publish-subscribe system (like http://pubsub.codeplex.com/) to notify all applications when the data is changed. But I believe there could be other helpful solutions. Do you know any of them? Thanks.
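
    A minimal sketch of the database-plus-notification idea, using Redis publish/subscribe as one concrete broker (an assumption; the asker's pubsub.codeplex.com library plays the same role); the channel name and payload shape are invented for illustration:

        import json
        import redis  # redis-py client; any pub/sub capable broker works the same way

        broker = redis.Redis(host="shared-broker-host", port=6379)

        def save_and_notify(record_id, fields):
            # 1. Write the change to the shared database (omitted here).
            # 2. Tell every other application that this record changed.
            broker.publish("data-changed", json.dumps({"id": record_id}))

        def watch_for_changes():
            # Each application runs this (typically on a background thread)
            # and reloads the record when a notification arrives.
            sub = broker.pubsub()
            sub.subscribe("data-changed")
            for message in sub.listen():
                if message["type"] == "message":
                    changed = json.loads(message["data"])
                    print("record %s changed elsewhere, reloading" % changed["id"])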

    Read the article

  • Copying just the data from one Database to another

    - by monksy
    I'm not sure if this is the right site for this question [if not, leave a comment or vote to move it]. How can I copy only the data from one database to another within the same server on SQL Server 2005? The two databases have the same schema but not the same data, and I'm trying to get the data from one database into the other. I am not able to restore from a snapshot [that messes up the security settings on the database], and I'm not able to use the Import Data wizard, because that tries to copy over the schema as well.
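
    A hedged sketch of a data-only copy, assuming both databases sit on the same SQL Server 2005 instance and the target tables already exist; the connection string, database names and table list are placeholders (pyodbc is used here only as a convenient way to run the T-SQL):

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")
        cursor = conn.cursor()

        # Placeholder list -- in practice read it from INFORMATION_SCHEMA.TABLES.
        tables = ["dbo.Customers", "dbo.Orders"]

        for table in tables:
            # Copies rows only; the schema in TargetDb is untouched.
            # Tables with identity columns also need SET IDENTITY_INSERT ON/OFF.
            cursor.execute("INSERT INTO TargetDb.{0} SELECT * FROM SourceDb.{0}".format(table))

        conn.commit()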

    Read the article

  • web service data type (contract)

    - by cyberguest
    Hi, I have a general design question. We have a fairly big data model that represents a clinical object; the object itself has 200+ child attributes in the hierarchy, and we have a SetObject operation and a GetObject operation. My question is, best-practice wise, would it make sense to use that single data model in both operations, or a different data model for each? The Get operation will return much more detail than what's needed for Set. An example of what I mean: the data model has, say, ProviderId and ProviderName attributes. In the Get operation, both ProviderId and ProviderName need to be returned. However, in the Set operation, only ProviderId is needed, and ProviderName is ignored by the service since the system has that information already. In this case, if the Get and Set operations use the same data model, ProviderName is exposed even for the Set operation; does that confuse the consuming developer?
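
    One common pattern is to keep the two contracts separate so each operation exposes only the fields it actually uses. A small illustration (Python dataclasses standing in for the real data contracts; only the ProviderId/ProviderName fields from the question are shown):

        from dataclasses import dataclass

        @dataclass
        class ProviderRef:
            # What the client is allowed to send: identity only.
            provider_id: int

        @dataclass
        class ProviderDetails(ProviderRef):
            # What the service returns: identity plus server-owned fields.
            provider_name: str

        def set_object(ref: ProviderRef) -> None:
            # ProviderName is simply not part of the Set contract,
            # so it cannot mislead the consuming developer.
            ...

        def get_object(provider_id: int) -> ProviderDetails:
            # The Get contract carries the richer shape.
            ...

    The cost of the split is a second model to maintain; the benefit is that the Set contract documents exactly what the service will read.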

    Read the article

  • How to divide the header from binary data

    - by fixo2020
    Hi, I have this code: ofstream dest("test.txt",ios::binary); while (true){ size_t retval = recv (sd, buffer, sizeof(buffer), 0); dest.write(buffer,retval); if(retval <= 0) { delete[] buffer; break;} } Now, the recv() function returns 4 bytes each loop, right, and buffer contains them? This returns all the data, so pseudo-header and binary data (image), but I want to know how to capture only the binary data. I know that the end of the header is "\n\r", right? But what is the better solution for this? Do I write a function that detects the "\n\r" and then captures the binary data after it? Or do I put all the data in memory and parse it afterwards? But how? I'm desperate :(
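
    The usual technique, shown here as a minimal Python sketch rather than in the question's C++ (and assuming the header ends with a blank line, \r\n\r\n, as in HTTP): keep receiving into a buffer until the delimiter appears, treat everything before it as the header, and write everything after it, plus all later reads, straight to the binary file.

        import socket

        def receive_image(sock: socket.socket, path: str) -> bytes:
            """Returns the header bytes; writes the binary body to `path`."""
            buf = b""
            # 1. Accumulate until the end-of-header marker shows up.
            while b"\r\n\r\n" not in buf:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("connection closed before the header ended")
                buf += chunk
            header, body = buf.split(b"\r\n\r\n", 1)
            # 2. Everything after the marker is already binary data; keep reading.
            with open(path, "wb") as dest:
                dest.write(body)
                while True:
                    chunk = sock.recv(4096)
                    if not chunk:
                        break
                    dest.write(chunk)
            return header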

    Read the article

  • PHP Losing variable data

    - by Conor B
    Hi, I'm having an issue with PHP losing data in a variable. There is quite a bit of data in the variable, because it basically contains a binary file, but I'm wondering if this is cause for it to completely lose its information. Looking at a snippet from my code which is used to deal with email attachments: var_dump($data) if (array_key_exists('filename', $params) || array_key_exists('name', $params)) { var_dump($data) ... } The first var_dump gives the desired output of the file: "string(283155) " --Apple-Mail-5-930065543 ... etc while the second gives an output of: string(0) "" ... string(0) "" Any idea why this is happening? Does PHP just drop data in variables if they are really large? (I didn't think so, as I've never had this problem before.) If so, is there any workaround? Thanks!

    Read the article

  • Is IntelliJ IDEA 10 "the most intelligent Java IDE"? Its new version highlights Android, Spring and GWT

    Is IntelliJ IDEA "the most intelligent Java IDE"? Version 10 highlights Android, Spring and GWT. JetBrains is a software company known for its award-winning Java integrated development environment, IntelliJ IDEA. Version 10 of the IDE was officially released yesterday. It is a major release thanks to a series of new features, including in the free edition, the most important certainly being Android support. JetBrains describes its IDE as a tool for "polyglot, JVM-based development". Version 10 brings many improvements to the numerous technologies and frameworks it...

    Read the article

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server. It's running a PHP-based web application. The server is sitting behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ. The issue we're having is that when data is posted to the server, it seems to be cut short for some users. It's reproducible for some users on the same box, but when the same user sends the same data from another PC on the same LAN, it works. The data gets cut to around 1140 bytes, I'm told. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would help. Our firewall is Astaro. EDIT: The customer has temporarily set the Ethernet frame size to 500 bytes on the server. This made it work for now! I know some of the customers are using an internet provider that runs PPPoE.

    Read the article

  • Exporting Client Data from Groupwise 6.5 to Outlook 2010 without Crashing

    - by Adam Doherty
    My employer has recently moved from Novell GroupWise 6.5 to Exchange 2010. We've imposed mailbox limits on staff, but we still need to move their old messages, contacts, calendars, etc. over to Outlook 2010. Our problem, however, is this: using the Novell MAPI client within Outlook 2010 is slow, and exporting messages to a PST file (for later re-attachment and offline backup purposes) crashes the GroupWise server. Connecting to the server in Outlook via IMAP to export messages to PST is faster and apparently more stable, but it also crashes the server. We'll be keeping our GroupWise server online internally until the end of the year, but I have staff with mailboxes approaching 12 gigabytes, which is fine if we're going to move the data to offline storage (DVD set), but if I keep crashing the server every time I try to get the data I'll just be spinning my wheels. In my first attempts, I tried to move mail for a staff member with 3GB of data. The transfer lasted roughly 8 hours before crashing. I'm wondering if there is an open source solution to my problem. Paid solutions exist, but we're a not-for-profit organization and have too many staff to justify the cost of per-seat licenses just to migrate mail.

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory; however, I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I back up the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5s instead of one RAID 6?

    Read the article

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply). The Windows Firewall service is disabled as well. The connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents and download small files. When a client attempts to download a large file (e.g., if the download continues for 3 minutes, which is still under 20 minutes but relatively long), the server sends all the data, then the connection times out, the client issues REST / RETR commands attempting to restart the download from after the last byte (which I believe should succeed and receive exactly 0 bytes), and the server behaves as if the client tried to restart after byte 0, that is, it sends the entire file all over again. Any ideas on how to fix this?
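
    For reference, this is roughly what the client side of that restart looks like; a hedged sketch with Python's ftplib (host, credentials and file names are placeholders). A server that honours REST should send only the bytes after the given offset, so a restart issued after the last byte should transfer nothing:

        import os
        from ftplib import FTP

        def resume_download(host, user, password, remote_name, local_name):
            ftp = FTP(host)
            ftp.login(user, password)
            # Bytes we already have locally; this becomes the REST offset.
            offset = os.path.getsize(local_name) if os.path.exists(local_name) else 0
            with open(local_name, "ab") as out:
                # ftplib sends "REST <offset>" followed by "RETR <name>".
                ftp.retrbinary("RETR " + remote_name, out.write, rest=offset)
            ftp.quit()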

    Read the article

  • How the trashcan utility works on a Tru64 Unix server, or is there any other utility?

    - by RBA
    Hi, I used the mktrashcan command: mktrashcan deleteMe1 trashcan/ And then I deleted all the contents inside the deleteMe1 directory (rm -rf *). But what happened is that only the two text files directly inside deleteMe1 (deleteMe2.txt, deleteMe3.txt) were moved into the trashcan folder. The rest of the directories, and the files inside those directories, were not found! Isn't there some other way, so that whatever is deleted moves exactly the same way into the trashcan directory? Or is there any other utility that can perform the same task in a more advanced way? mkdir deleteMe1 mkdir deleteMe1/deleteMe2 mkdir deleteMe1/deleteMe3 touch ./deleteMe1/deleteMe2/deleteMe4.txt touch ./deleteMe1/deleteMe2/deleteMe5.txt touch ./deleteMe1/deleteMe3/deleteMe6.txt touch ./deleteMe1/deleteMe3/deleteMe7.txt touch ./deleteMe1/deleteMe2.txt touch ./deleteMe1/deleteMe3.txt Thanks.
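
    mktrashcan aside, a generic fallback (a sketch, not Tru64-specific) is to stop deleting with rm -rf and instead move the whole tree into a trash directory, which keeps subdirectories and their files intact:

        import os
        import shutil
        import time

        def trash(path, trashcan="/var/trashcan"):
            # Move `path` (a file or a whole directory tree) into the trashcan.
            os.makedirs(trashcan, exist_ok=True)
            # Timestamp the entry so repeated deletions of "deleteMe1" don't collide.
            target = os.path.join(trashcan, "%s.%d" % (os.path.basename(path), int(time.time())))
            shutil.move(path, target)

        # trash("deleteMe1")  # deleteMe2/, deleteMe3/ and all their .txt files survive intact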

    Read the article

  • Some URLs fail to load on Windows web portal

    - by jpolache
    I'm working in a large data center and have been assigned to troubleshoot an issue with a Windows (IIS) web server that acts as a portal for a customer of the data center. This portal server is on a DMZ at the local data center. I don't have access to the portal desktop and am relying on an off-site administrator to work with me to do testing and report the condition of the portal. He tells me there are no software firewalls or other filtering configured. While most of the remote web pages work fine, several of the URLs the portal is supposed to serve up fail to load. I had Wireshark installed on the portal system and had a capture taken of one of the failures. I used IE to access one of the remote web servers at issue. I could see the TCP SYN-ACK coming back from the remote server, but after several HTTP GETs fail to get a response, the portal server sends a reset. The webmaster of the remote web server assures me that no sites are being blocked. I had a capture taken outside the local firewall, so there should be no issue there. Another tech set up a laptop and used the IP address of the portal (we took the portal offline for the test). The laptop loads the URL as expected. I tried having Firefox loaded to make sure that the HTTP GET was not malformed. Same failure as with IE. So, it seems it is not the remote web server or the network, because there was no problem with the laptop. At this point, I'm not sure what other questions to ask or tests to do.

    Read the article

  • SQL Server 2008 data directories on SSD

    - by Kuroro
    I am going to install a new SQL Server 2008 instance on my development/testing machine. My machine has one 7200rpm 500GB SATA disk (C: OS) and one Intel X25-G2 80GB SSD (D:). The detailed machine config is as follows: CPU: i7 860, RAM: 8GB. Microsoft says I have the option to place the following directories on different disks, so I plan to place the user databases and temp DB on the SSD and the rest on the traditional disk. Is this a good choice for gaining a performance boost from the fast SSD? Data root directory: C:\Program Files\Microsoft SQL Server; User database directory: D:\Data; User log directory: C:\Logs; Temp DB directory: D:\TempDB; Temp log directory: C:\TempDB; Backup directory: C:\Backups

    Read the article

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) install running on a laptop, where I defined the filesystems as: mount point / on ext4 (46 GB), mount point /home on jfs (63 GB), swap as 3 GB. I left the machine overnight to do some task, without AC power. The next morning I found it on standby, the task completed, but the filesystem was not reachable; it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs format to ext4. Can I do this without losing data and without the need to place the data in a temporary location until the transformation is done? Sorry to mention it, but I recall back in the Windows days we would change FAT16 to FAT32 or FAT32 to NTFS without having to lose the data. I hope this is available on Linux. Update: The /home filesystem was xfs, not jfs, and it seems there is a bug with this filesystem for some reason; I had to re-install the OS twice until I ended up with ext4 for the entire /. However, as a conclusion, it seems that there is no way to make such a conversion.

    Read the article

  • Western Digital My Book: Can't access the data on the drive

    - by Bryan Denny
    My girlfriend has this external hard drive by Western Digital called a My Book. When the external drive is connected, the computer does not show it as an accessible disk drive. However, it shows up fine in Device Manager: I can also see it in Disk Management, but the volume is not mapped to a drive letter, nor can I change the drive letter: It only gives me access to Delete Volume: I would rather not lose the data on the drive if possible. What can I do from here to get to the data? Things I've tried/know: Uninstall the drivers and re-install them. The device does the same thing when attached to either her Win7 laptop or my Win8 laptop. I don't think there's an issue with the HDD itself: no clicking noises, etc. I ran Western Digital Data LifeGuard Diagnostics (DLGDIAG) and the SMART status was a "PASS"; all of the SMART disk information looked fine. I haven't had the time to run the diag tests yet, but I do not believe it's a mechanical issue. The hard drive is inside an enclosure; I have not attempted to pry the drive out yet. How can I get Windows to properly detect this drive?

    Read the article

  • Looking for recommendations on OCR problem - tabular numeric data

    - by ldigas
    I have 20 pages of experiment measurement data which I need to digitize. The results are in tabular form, scanned at 600 dpi resolution, and as far as scans go, they came out pretty clean and readable. For an example of how it looks, see here (but beware: it is a rather big scan, about 5 MB; no problem for any broadband connection, but dial-ups should approach with caution!) ... and I need it finished by Sunday afternoon (:-o) <-- smiley in a state of panic (then why didn't you start sooner?)... yeah, yeah ... I know ... but it came up late, and I wasn't thinking I was going to need this data as well. So, I'm looking for recommendations. I haven't much experience with OCR programs, save scanning a page or two of pure text, but just to mention, I have no wish to test out every OCR program out there, so this isn't a "name your favourite OCR" question. What I'm looking for is advice from someone who has done something like this, and his/her experience on what would be the best way to go about it. I need the data in txt form, but since it will have to be checked (by drawing it and simply watching whether some points "jump out"), I'll probably be entering it into Excel at first.

    Read the article

  • Divide pivot table data by an arbitrary column in another table

    - by rsavu
    Hello all, I have this data from a pivot table: Countries P1 P2 Total Country 1 10 69 Country 2 36 2 92 Country 3 21 24 100 Country 4 22 77 Country 5 13 79 Country 6 12 1 48 Country 7 14 29 Country 8 22 1 46 Country 9 4 1 31 Country 10 16 7 120 Country 11 25 2 114 Country 12 8 11 68 Country 13 5 27 Country 14 11 3 23 Country 15 6 19 Country 16 33 79 Where: the 1st column is the country name, the 2nd and 3rd columns are the tickets introduced in the system, and the 4th column is the total (disregard the data - the total is not accurate). Additionally, I have another table that looks like this: Country P1 P2 Country 1 2 3 Country 2 2 2 Country 3 0 2 Country 4 0 3 Country 5 1 1 Country 6 2 2 Country 7 1 2 Country 8 3 3 Country 9 1 4 Country 10 2 1 Country 11 4 2 Country 12 2 1 Country 13 3 2 Country 14 3 3 Country 15 1 2 Country 16 2 2 Where the data represents the number of users of the application in each country. I want to be able to show the number of tickets submitted divided by the number of users in each country. Any ideas on how to do that? Thank you very much, Razvan
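
    Inside Excel this is usually a lookup formula placed next to the pivot; if the two tables can be pulled out of the workbook, the same per-country division is a short merge-and-divide. A sketch with pandas (sheet and column names are assumptions based on the question):

        import pandas as pd

        tickets = pd.read_excel("report.xlsx", sheet_name="Tickets")  # Countries, P1, P2, Total
        users = pd.read_excel("report.xlsx", sheet_name="Users")      # Country, P1, P2

        merged = tickets.merge(users, left_on="Countries", right_on="Country",
                               suffixes=("_tickets", "_users"))
        # Tickets submitted per user, by country and priority
        # (countries with 0 users of a priority come out as inf -- decide how to display those).
        merged["P1_per_user"] = merged["P1_tickets"] / merged["P1_users"]
        merged["P2_per_user"] = merged["P2_tickets"] / merged["P2_users"]
        print(merged[["Countries", "P1_per_user", "P2_per_user"]])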

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database, which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option) and exported the database as a single SQL file using HeidiSQL. When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (-) are displayed as – and e with accent (é) are displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new utf8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error: /* SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?# -------------------------------------------------------- # Host: ' at line 1 */ /* Error with snippets directory: The specified path was not found */ The head of the SQL file: # -------------------------------------------------------- # Host: 127.0.0.1 # Server version: 5.1.33-community # Server OS: Win32 # HeidiSQL version: 6.0.0.3773 # Date/time: 2011-04-20 09:48:36 # -------------------------------------------------------- It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps: Export the database as a single SQL file using HeidiSQL; open the resulting file in Notepad++ and convert it from ANSI to UTF-8 encoding; create a new empty file in Notepad++, paste in the UTF-8 content and save the file normally. What am I missing here?
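
    The leading '?' in "near '?#'" is the classic signature of a UTF-8 byte order mark: Notepad++'s plain "UTF-8" conversion writes a BOM, which MySQL then rejects as the first character of the file. Re-saving as "UTF-8 without BOM" avoids it; alternatively, a tiny script can strip the three BOM bytes before the import (file names here are placeholders):

        BOM = b"\xef\xbb\xbf"

        with open("dump.sql", "rb") as src:
            data = src.read()  # fine for a dump this size; stream it for very large files

        with open("dump_nobom.sql", "wb") as dst:
            dst.write(data[len(BOM):] if data.startswith(BOM) else data)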

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they tell me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind: Get a 1 TB external HDD. Make an image of the existing 500 GB HDD and store it on the external disk. Install the new HDD, install a brand new Windows copy and then install the software on it. Using the same software I used to make the image, restore the old HDD image onto the new one. However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue and I am sure the support executive who will come to change the HDD will have a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Now, Dell does not supply OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase good software for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows Migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Is there any other method to save all the data? Thanks in advance.

    Read the article

  • Data Protector red tapes

    - by Caesar
    I am using HP Data Protector A.06.11 in my organization, with an HP EML E-Series library with 4 drives using LTO-4 tapes, and I am having some problems. Yesterday I put 5 new tapes in the robot and formatted them. At that time, the robot had just those 5 empty tapes with free space (all the rest of the tapes are red, or protected). This morning, after the night (1 backup runs at night), 2 of the new tapes are red. Their properties are: Writes: 2, Overwrites: 1, Errors: 9. I formatted one of them and checked, for each drive, whether the tape becomes red; none of the drives causes it. In the main pool properties, the media condition shows: Valid for: 36 months, Maximum overwrites: 250.

    Read the article

  • Split a large text file without losing data records

    - by Santosh
    I have a 140 MB text file which contains detailed information about books in a library. For each book there is a standard format for its details in the text file. I need to parse it and insert the data into a database. Parsing the text file is not the issue here; I am facing a problem parsing such a large file, so I decided to split the file into small files of around 2 MB each. But I can't manually split this large file into so many pieces. I found the HJSplit tool, which splits the file, but this also didn't help, because it splits the file so that half of one book's details ends up in one file and the rest in the next file; if I split this way, information will be lost. How can I split the large file so that I don't lose information? Is there any tool which can help me in this situation?
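
    Generic splitters such as HJSplit cut at a byte count, which is exactly why half a book's record lands in each file. A short script that only cuts between records avoids the problem; this sketch assumes each book's details end with a known separator line (the "----" is a placeholder for whatever the real format uses):

        RECORD_END = "----\n"          # placeholder: the line that ends one book's details
        CHUNK_SIZE = 2 * 1024 * 1024   # aim for roughly 2 MB per output file

        def split_on_records(path):
            part, written = 1, 0
            out = open("part_001.txt", "w", encoding="utf-8")
            record = []
            with open(path, encoding="utf-8") as src:
                for line in src:
                    record.append(line)
                    if line == RECORD_END:               # a whole record is complete
                        out.writelines(record)
                        written += sum(len(l) for l in record)  # approximate size in characters
                        record = []
                        if written >= CHUNK_SIZE:        # only roll over between records
                            out.close()
                            part += 1
                            written = 0
                            out = open("part_%03d.txt" % part, "w", encoding="utf-8")
            if record:                                    # flush a trailing partial record, if any
                out.writelines(record)
            out.close()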

    Read the article

  • Dropped WD External Harddisk, now it's shown as "Not initialized"

    - by Phelios
    So, the WD My Passport external hard disk was dropped, and after that the computer is unable to read it anymore. I was hoping I could just find another enclosure to check whether the hard disk is still readable, but it looks like the drive itself is not a normal SATA or PATA drive; I think it's modified, so I can't find another enclosure to try it in. In the computer, I can still see the drive in Disk Management, but it's shown as uninitialized, with no size and no drive letter. I've also tried a couple of recovery tools. Some can't detect it at all; there is one (the Find and Mount software) that can detect it but shows 0 size. None of them can recover the data. WD is willing to replace it with a new one, but I still need to recover the data. Is there any way I can recover the data? UPDATE: I tried initializing it from the Windows Disk Management console, but it gives the error "The request could not be performed because of an I/O device error."

    Read the article

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2 TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the whole process afterwards. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance.

    Read the article

  • Excel 2007: Exporting more than 100 columns to a .prn file but data is concatenated

    - by Don1
    I want to export an Excel worksheet to a space delimited (.prn) file. The worksheet is pretty big (187 columns) and when I set the column widths and try to export the worksheet to a .prn file, the data gets cut at the 98th column (i.e. about 200 characters wide for my data) and the rest is placed directly underneath. It's like I ripped a page in half from top to bottom and placed the right-hand side directly under the left-hand side. How would I get it to export everything without getting concatenated?

    Read the article

  • ext4: error loading journal

    - by cloudyOutside
    I have an external hard drive with two partitions: a small FAT32 partition, which is mostly empty and works fine, and a large ext4 partition with tons of data, most of which isn't backed up. The ext4 partition is visible but can't be mounted; I get an "error loading journal" error. The drive is a Western Digital Caviar Blue 500GB. Roughly 30GB of that is FAT32 and the rest is the ext4. The light on the enclosure turns red when reading from the bad partition. The enclosure was made by Cavalry. There wasn't any warning, but coincidentally, I've been thinking lately that I should get two large-capacity drives for real backups. Is there anything that can be done? I'm not even sure I have enough storage to back up everything even if it is recoverable.

    Read the article
