Search Results

Search found 89075 results on 3563 pages for 'data files'.

Page 115/3563 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • String or binary data would be truncated.

    - by Derek Dieter
    This error message is relatively straightforward. It normally happens when you insert data from a source table whose values are longer than the columns of the table you are inserting into allow. An example of this would be trying to insert data from a permanent table into [...]

    Read the article

  • I cannot play some WMA files

    - by Lucio
    I backed up several music files from my older Windows XP system. Now I can play all the .MP3 files but not all the .WMA files. Some kinds of WMA files play without problems, as you can see in the picture below: the file on the right can be played, the one on the left can't. I have spent a long time looking through different sites and Q&As here, installing many packages, but no luck. An example is this answer. What can I do? My system: Ubuntu 11.10 32-bit. Players: Banshee & VLC.
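
    A first thing worth checking (a suggestion not made in the question itself; the package names below are assumptions for an Ubuntu 11.10-era system) is whether the restricted codec packages are installed, since some WMA profiles (WMA Pro, lossless, or DRM-protected files) need extra GStreamer plugins or cannot be played at all:

        # hypothetical fix; package names assumed for the 11.10 GStreamer 0.10 stack
        sudo apt-get install ubuntu-restricted-extras gstreamer0.10-plugins-bad gstreamer0.10-plugins-ugly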

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe - in actual bytes per second. From what I gather, bfr would be applicable, but it has been removed from Debian (Removal candidate: bfr). cpipe might work, but it seems the lowest resolution it supports is kB/s, meaning that buffer writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums). What I'd like is to be able to specify something like cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0 ... and actually have a single byte from example.txt sent every 1/100 = 0.01 sec (or 10 ms) to the output. Thanks in advance for any suggestions, cheers!
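
    One tool worth trying here (a suggestion not made in the question itself) is pv, whose -L/--rate-limit option takes its value in plain bytes per second when no unit suffix is given:

        # a sketch, assuming pv is installed (sudo apt-get install pv)
        cat example.txt | pv -q -L 100 > /dev/ttyUSB0   # -q: no progress bar, -L 100: ~100 bytes/s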

    Read the article

  • How can I open binary image files? (.img)

    - by Simon Cahill
    I'm a Windows/Mac/Ubuntu and Android user, so I know what I'm talking about when I say: how do I open binary image files (.img)? They just won't open on any OS... I'm an Android developer currently working on a ROM (I also program on Windows), and I need to extract files from .img files. I've converted them to .ext4.img, but they still aren't recognized by Linux (definitely not by Android), Mac OS or Windows. In other words, I can't open, extract or mount them. Can anyone help me? I'm kind of confused...
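
    On Linux, a loop mount is usually the first thing to try (a suggestion not taken from the question); it works once the image actually contains a filesystem the kernel recognizes. Android sparse images typically need to be converted with a tool such as simg2img before they can be loop-mounted.

        # a sketch - the image name and mount point are placeholders
        file system.img                            # check what the image actually contains
        sudo mkdir -p /mnt/img
        sudo mount -o loop,ro system.img /mnt/img
        ls /mnt/img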

    Read the article

  • Data recovery: nearly 1 TB of movies on a WD 3.5 TB personal cloud drive disappears with scant traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD personal cloud drive. I woke up one morning and found that everything was fine with the data on this drive except for my movie collection. There were two big folders, one called "2sort" and the other "segregated". Of all the segregated subfolders, only the letters C, D and 2 or 3 others remain, and the "2sort" folder, which had umpteen subfolders amounting to more than 0.5 TB, is just gone. Since this is a personal cloud drive, it has no USB port or the like that I could use to hardwire it and recover files. I'm sure there is software out there that can help me recover my beloved movies from such an interestingly "hard-to-reach" (should I say?) device - what might that software be? These movies were all hand-picked over the course of ten years, and I never catalogued my collection, so even just getting the "list" of my lost collection would be enough; recovering them would be a bonus, although they might come out damaged if I were to somehow recover them. Still, I'm fairly certain they're all intact - I guess the file index just got corrupted, and some veil needs to be pushed aside to reveal my movies again. What software can do that? Thanks immensely!

    Read the article

  • Loading from Multiple Data Sources with Oracle Loader for Hadoop

    - by mannamal
    Oracle Loader for Hadoop can be used to load data from multiple data sources (for example Hive and HBase) and data in multiple formats (for example Apache weblogs and JSON files). There are two ways to do this: (1) use an input format implementation - Oracle Loader for Hadoop includes several input format implementations, and a user can also develop their own for proprietary data sources and formats; or (2) leverage the capabilities of Hive, and use Oracle Loader for Hadoop to load from Hive. These approaches are discussed in our Oracle Open World 2013 presentation.

    Read the article

  • Validation and Error Generation when using the Data Mapper Pattern

    - by AndyPerlitch
    I am working on saving the state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error message generation step (step 4 below). Here are the general steps as I see them for doing this:

    (1) The data mapper is used to get the current info (assoc array) about the object in the db:

        +===========+======+================+=====+
        | person_id | name | favorite_color | age |
        +===========+======+================+=====+
        | 1         | Andy | Green          | 24  |
        +-----------+------+----------------+-----+

    The mapper returns an associative array, e.g. Person_Mapper::getPersonById($id):

        $person_row = array(
            'person_id'      => 1,
            'name'           => 'Andy',
            'favorite_color' => 'Green',
            'age'            => '24',
        );

    (2) The Person object constructor takes this array as an argument, populating its fields:

        class Person
        {
            protected $person_id;
            protected $name;
            protected $favorite_color;
            protected $age;

            function __construct(array $person_row)
            {
                $this->person_id      = $person_row['person_id'];
                $this->name           = $person_row['name'];
                $this->favorite_color = $person_row['favorite_color'];
                $this->age            = $person_row['age'];
            }

            // getters and setters...

            public function toArray()
            {
                return array(
                    'person_id'      => $this->person_id,
                    'name'           => $this->name,
                    'favorite_color' => $this->favorite_color,
                    'age'            => $this->age,
                );
            }
        }

    (3a) (GET request) Inputs of an HTML form that is used to change info about the person are populated using Person::getters:

        <form>
            <input type="text" name="name" value="<?=$person->getName()?>" />
            <input type="text" name="favorite_color" value="<?=$person->getFavColor()?>" />
            <input type="text" name="age" value="<?=$person->getAge()?>" />
        </form>

    (3b) (POST request) The Person object is altered with the POST data using Person::setters:

        $person->setName($_POST['name']);
        $person->setFavColor($_POST['favorite_color']);
        $person->setAge($_POST['age']);

    *(4) Validation and error message generation on a per-field basis:
    - Should this take place in the Person object or the Person mapper object?
    - Should data be validated BEFORE being placed into fields of the Person object?

    (5) The data mapper saves the Person object (updates the row in the database):

        $person_mapper->savePerson($person);
        // the savePerson method uses $person->toArray()
        // to get data in a more digestible format for the
        // db gateway used by person_mapper

    Any guidance, suggestions, criticism, or name-calling would be greatly appreciated.

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not just listen) about big data, here are 2 very cool sessions: BOF9877 - a birds-of-a-feather session around all things big data. It is on Monday, Oct 1, 6:15 PM - 7:00 PM - Marriott Marquis - Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel: he is a proud owner of a Big Data Appliance (see here). Then there is a Big Data SIG meeting (the invite from Gwen): I'd like to invite everyone to our OOW12 meet-up. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West Level 3, Overlook 3. We will network, socialize and discuss plans for the group. Which topics interest us for webinars? Which conferences do we want to meet at? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on the challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open. Here is the official invite. See you at OpenWorld!

    Read the article

  • Access to Salesforce.com Data Through Tableau Desktop

    - by dataintegration
    This article will explain how to connect any of the RSSBus OData Connectors to Tableau's business intelligence tool. While the example uses the Salesforce Connector, the same process can be followed for any of the OData Connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and Tableau Desktop from Tableau.
    Step 2: Configure the Salesforce Connector to connect to your Salesforce.com account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting it up.
    Step 3: Once you have successfully configured the Salesforce Connector application, open Tableau and select the "Connect to data" option at the top left of the window.
    Step 4: Click on the option labeled OData under the section labeled "On a server".
    Step 5: A new pop-up will appear. The box under Step 1 of the pop-up must contain the OData URL of the Salesforce Connector table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the OData entry URL, append the name of the table that you want Tableau to connect with to the OData entry URL. In this example, we will connect to the Account table, so the URL we enter is: http://localhost:8181/sfconnector/data/conn/odata.rsc/Account. You will also need to add authentication options in this step: select the "Use a Username and Password" option in Step 2 of the pop-up and enter the username and password of a user who has access to the Salesforce Connector. When you are done, click the Connect button in Step 3 of the pop-up.
    Step 6: When the connection to the Salesforce Connector succeeds, give the connection a name and click the OK button.
    Step 7: The table columns will be listed on the left side under the Dimensions section of the workspace.
    Step 8: To view your Salesforce.com data, right-click under the table name in the Data section at the top left of the dashboard and select the "View Data" option. Your Salesforce.com data will appear in Tableau.
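
    Before pointing Tableau at the URL, it can be worth confirming from the command line that the OData endpoint from Step 5 answers at all (this check is not part of the article; the credentials below are placeholders):

        # a sketch - substitute the connector user configured in Step 5
        curl -u "USER:PASSWORD" "http://localhost:8181/sfconnector/data/conn/odata.rsc/Account"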

    Read the article

  • What is a good way of sharing specific data between ViewModels?

    - by voroninp
    We have an IAppContext service which is injected into each ViewModel. This service contains shared data: global filters and other application-wide properties. But there are cases when the data is very specific. For example, one VM implements the Master view and a second one the Details of the selected tree item, so DetailsVm must know about the selected item and its changes. We can store this information either in IAppContext or inside each concerned VM; in both cases update notifications are sent via Messenger. I see pros and cons to both approaches and cannot decide which one is better. 1st: + explicitly exposed shared properties, easy-to-follow dependencies; - IAppContext becomes cluttered with very specific data. 2nd: the exact opposite of the first, plus more memory load due to data duplication. Maybe someone can offer design alternatives, or tell me that one of the variants is objectively superior to the other because I'm missing something important?

    Read the article

  • Why doesn't file detect the mime-type of mp3 properly?

    - by Grumbel
    Something odd I recently encountered: running file --mime-type on a collection of MP3s gets the mime-type wrong a third of the time:

        $ for i in */*.mp3; do cat "$i" | file --mime-type -; done | sort | uniq -c
            140 /dev/stdin: application/octet-stream
            309 /dev/stdin: audio/mpeg

    There doesn't seem to be any obvious reason; even MP3s from the same source will sometimes fail and sometimes not. Bug, feature, or anything obvious I am missing here?
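
    One likely explanation (a guess, not something confirmed in the question) is that when file reads from stdin it only inspects a limited prefix of the stream, so an MP3 whose ID3v2 tag (embedded cover art, lyrics, etc.) fills that prefix never shows an MPEG frame header within the inspected bytes and falls back to application/octet-stream. Comparing the stdin pipeline against direct-file invocation is a cheap way to test that theory:

        # sketch: count detections with and without the pipe
        for i in */*.mp3; do file --mime-type -b "$i"; done | sort | uniq -c
        for i in */*.mp3; do cat "$i" | file --mime-type -; done | sort | uniq -c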

    Read the article

  • A lot of 302 redirects

    - by user3651934
    I have a website whose one-month stats show:

        Unique Visitors   6274
        Total Visitors    7260
        Pages visited     9520
        Hits             88891

    What concerns me is the HTTP status code count:

        302 Moved temporarily (redirect)  36302

    How come 40% of hits are being redirected? If this is not normal, what could the possible reasons be?

    ------------------------ adding more information ------------------------

    OK, here is the code I'm using in my .htaccess file for clean URLs. Is this causing as many as 36302 redirect hits?

        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^([^\.]+)$ $1.php [L]

        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.+[^/])$ $1/ [R]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]
        RewriteRule ^(.*)/$ index.php?page=$1 [L,QSA]

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It’s always good to hear feedback from practitioners – the ones in the trenches who have experienced both the good and the bad sides of enterprise software. Gartner recently released a report which surveyed 260 data quality professionals from around the world and found that most expressed considerable satisfaction with their data quality tool vendors as a whole. A couple of key findings stand out, however, including Datanomic (acquired by Oracle) leading the pack in terms of overall customer satisfaction among data quality tools. Read all about it right here http://bit.ly/Ay45SG

    Read the article

  • Off the Charts: Getting Cost Data into Google Analytics

    With Analytics' new Cost Data Upload feature, users can measure and analyze non-Google cost data to calculate paid campaign effectiveness. Developers can build solutions to upload exported cost data into Analytics so marketers can have a unified view of their campaign spend - all within the Google Analytics interface. Join Google Analytics Developer Advocate Pete Frisella for a dive into the implementation of this new feature through the robust Analytics APIs.

    Read the article

  • Archiving SQL Server Data Using Partitioning

    Many companies now have a requirement to keep data for long periods of time. While this data does have to be available if requested, it usually does not need to be accessible by the application for any current transactions. Data that falls into this category is a good candidate for archival.

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    As I am in the process of creating a 3D C++ game, I was wondering what would be more beneficial when dealing with game asset storage. I have seen some games use a single compressed asset file with everything in it, and others use lots of little compressed files. If I had lots of individual files I would not need to load one large file at once and use up memory, but the code would have to go seeking for files when the level loads to find all the correct ones. There is no file seeking needed when dealing with one large file, but then what about all the assets not currently needed that would get loaded with the one file? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me what other advantages and disadvantages there are to either way of doing things.

    Read the article

  • Options for transparent data encryption on SQL 2005 and 2008 DBs.

    - by Dan
    Recently, a law was passed in Massachusetts (rather quietly) requiring that data containing personally identifiable information must be encrypted. PII is defined by the state as containing the resident's first and last name in combination with any of:

        A. SSN
        B. driver's license or ID card #
        C. debit or credit card #

    Due to the nature of the software we make, all of our clients use SQL Server as the backend. Typically servers will be running SQL 2005 Standard or above, sometimes SQL 2008. Almost all client machines use SQL 2005 Express, and we use replication between client and server. Unfortunately, to get TDE you need SQL Enterprise on each machine, which is absolutely not an option. I'm looking for recommendations of products that will encrypt a DB. Right now, I'm not interested in whole-disk encryption at all.

    Read the article

  • Problem with installation: "No root file system is defined."

    - by user92322
    I'm very new to Ubuntu and to Linux in general. Ubuntu seems like a really good and stable OS, so I decided to install it alongside my Windows 7 OS. I have a few problems with the installation. Here is what I did: I downloaded the 64-bit version from the official Ubuntu website and burned it to a DVD. I set the boot sequence to load from my CD-ROM first. The Ubuntu installation started and I chose "Install Ubuntu" from the menu (where there is also a "Try Ubuntu" option). I clicked forward until I got to the installation type screen. As you can see, the installer won't show the actual details of my hard drive! I have one hard drive of 750 GB:

        80 GB  - my main drive with the Windows 7 OS
        600 GB - all of my stuff
        20 GB  - free space that I saved for Ubuntu

    But the installation won't show that!

    Read the article

  • No such file but the file is there!

    - by user288757
    I'm trying to compile a C++ file with some includes. My main file, hdf5_getters (which I didn't write), includes a file which in turn includes hdf5.h - also not my design, it comes from a downloaded library. Every time I try to compile I get the error message that hdf5.h does not exist, while it clearly does. I started reading on the internet and people say it can happen when a 32-bit binary runs on a 64-bit architecture, but I'm running 32-bit Ubuntu so that can't be it... I'm out of ideas, if anyone can help me please :) This is the error message with commands:

        $ make hdf5_getters
        g++ -c -Wall -std=c++0x -O2 -c hdf5_getters.cc
        In file included from H5Cpp.h:20:0,
                         from hdf5_getters.cc:34:
        H5Include.h:17:18: fatal error: hdf5.h: No such file or directory
         #include <hdf5.h>
                          ^
        compilation terminated.
        make: *** [hdf5_getters.o] Error 1
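
    The usual cause of this error (a guess, not confirmed in the question) is simply that the compiler isn't told which directory hdf5.h lives in. Finding the header and passing that directory with -I often fixes it; the package name and include path below are assumptions that vary by Ubuntu release:

        sudo apt-get install libhdf5-serial-dev          # or libhdf5-dev, if the headers aren't installed
        dpkg -L libhdf5-serial-dev | grep 'hdf5\.h$'     # find where the header was installed
        g++ -c -Wall -std=c++0x -O2 -I/usr/include/hdf5/serial hdf5_getters.cc
        # use whatever directory the previous command printed in place of /usr/include/hdf5/serial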

    Read the article

  • "Google files": Building a web interface to find/ack/grep

    - by user27915816
    I am working on a project where we would like to build a web interface that gives the user the ability to "Google" files in a directory on a remote machine. For example, the user would type a string into a box, and the system would find all files that contain that string and present them in the browser. The system would then give the user the ability to click on any of the files to open or display them in the browser. We want to avoid reinventing the wheel if possible, but we don't really know where to start (none of us on the team have much experience building websites). What software packages, libraries or tools exist that can help us get this done?
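
    Whatever web framework ends up serving the page, the search behind it can start out as a thin wrapper around grep (a sketch, not something from the question; the directory and the QUERY variable are placeholders):

        # list files under a directory whose contents contain the query string
        grep -rli -e "$QUERY" /srv/remote/files
        # -r recurse, -l print only matching file names, -i case-insensitive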

    Read the article

  • Generating Data for Database Tests

    It is more and more essential for developers to work on development databases that have realistic data in both type and quantity, but without using real data. It isn't exactly easy, even with third-party tools to hand. Phil Factor shows how it can be done, taking the classic PUBS database and giving it a more realistic set of data.

    Read the article

  • Files don't sync with U1.

    - by wrenchman76
    I am running 10.1 (all current updates have been installed) and I'm unable to sync any files to U1. U1 shows my computer as being added to my account, and I bought enough space to hold all the files I want backed up, but for some reason it still doesn't sync. I have marked the files to be synced and checked the devices tab in the U1 preferences; it shows my computer as part of my account, but the connect button is inoperable and restart has no effect when clicked. I have read all the FAQs I can find relating to my problem and tried all the suggestions that seem to address problems similar to mine, as well as adding and removing my computer from my account a bunch of times. None of this produced any effect one way or another. I'm not very tech-savvy, so step-by-step instructions and/or layman's terms are greatly appreciated. Thank you.
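
    If your release ships the u1sdtool command-line helper (an assumption; it came with the Ubuntu One client on releases of that era), its status output is often the quickest way to see whether the sync daemon is connected at all:

        # a sketch - availability depends on the installed Ubuntu One client
        u1sdtool --status
        u1sdtool --connect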

    Read the article
