Search Results

Search found 70288 results on 2812 pages for 'custom data generator'.

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally: changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python.

    My question is: what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing a similar data structure for each version/template, which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as small as moving a single line of data one row down, or splitting what used to be one data field into two.

    A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with the series of variants that fits that set of files (see the sketch below). This is because, more often than not, most of the file remains consistent over a long period, marred only by one or two errant sections; but which section is the inconsistent one varies from period to period.

    For example, say a file has four sections with three data fields each, represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 are consistent, but section 4 is shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section 5 appears. For files 7501-7635, sections 1 and 3 are consistent, but section 2 has five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 is shifted back up, section 2 returns to three fields, but section 3 is removed entirely. Files 7800-8000 have everything in order.

    The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members; and so on. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database.

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for to see what other solutions might exist, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it. Any help is most appreciated. Thank you.
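    A minimal sketch of the variant-matching idea described above, in Python. The section names, cell coordinates, and range boundaries are hypothetical placeholders, and the sheet object is assumed to be an openpyxl-style worksheet:

        # Hypothetical variant maps: each maps a field name to the (row, col)
        # cell it occupies in that variant of the section.
        SECTION_FOUR_VARIANTS = {
            "normal":       {"pressure": (10, 2), "temp": (11, 2), "result": (12, 2)},
            "shifted_down": {"pressure": (11, 2), "temp": (12, 2), "result": (13, 2)},
        }

        # Which variant each range of file numbers uses (assumed boundaries).
        SECTION_FOUR_RANGES = [
            (0, 6999, "normal"),
            (7000, 7635, "shifted_down"),
            (7636, 9999, "normal"),
        ]

        def variant_for(file_number, ranges):
            """Return the name of the variant whose range contains file_number."""
            for low, high, name in ranges:
                if low <= file_number <= high:
                    return name
            raise ValueError("no variant covers file %d" % file_number)

        def extract_section_four(sheet, file_number):
            """Read section 4's fields from an openpyxl-style worksheet."""
            mapping = SECTION_FOUR_VARIANTS[variant_for(file_number, SECTION_FOUR_RANGES)]
            return dict((field, sheet.cell(row=r, column=c).value)
                        for field, (r, c) in mapping.items())

    Each extracted dict can then go straight into MongoDB, which tolerates the documents' shapes varying across template versions.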

  • jsTree dynamic JSON data from django

    - by danspants
    I'm trying to set up jsTree to dynamically accept JSON data from django. This is the test data I have django returning to jsTree:

        result = [{ "data" : "A node", "children" : [ { "data" : "Only child", "state" : "closed" } ], "state" : "open" }, "Ajax node"]
        response = HttpResponse(content=result, mimetype="application/json")

    This is the jsTree code I'm using:

        jQuery("#demo1").jstree({
            "json_data" : {
                "ajax" : {
                    "url" : "/dirlist",
                    "data" : function (n) {
                        return { id : n.attr ? n.attr("id") : 0 };
                    },
                    error: function(e){ alert(e); }
                }
            },
            "plugins" : [ "themes", "json_data" ]
        });

    All I get is the ajax loading symbol; the ajax error response is also triggered and it alerts "undefined". I've also tried simplejson encoding in django, but with the same result. If I change the url so that it points to a JSON file with identical data, it works as expected. Any ideas on what the issue might be?
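    One thing worth ruling out: in the snippet above, result is a Python list, not a JSON string, so HttpResponse will not serialize it into valid JSON by itself. Below is a minimal sketch of a view that serializes explicitly (the view name is an assumption, and the old-style mimetype keyword matches the question's Django era); the asker reports having tried simplejson, so treat this as a baseline to compare against rather than a guaranteed fix:

        import json  # on older Django: from django.utils import simplejson as json
        from django.http import HttpResponse

        def dirlist(request):
            result = [{"data": "A node",
                       "children": [{"data": "Only child", "state": "closed"}],
                       "state": "open"},
                      "Ajax node"]
            # json.dumps produces the JSON string jsTree expects.
            return HttpResponse(json.dumps(result), mimetype="application/json")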

  • How to filter the jqGrid data NOT using the built-in search/filter box

    - by Jimbo
    I want users to be able to filter grid data without using the intrinsic search box. I have created two input fields for date (from and to) and now need to tell the grid to adopt this as its filter and then request new data. Forging a server request for grid data (bypassing the grid) and setting the grid's data to the response data won't work, because as soon as the user tries to re-order the results or change the page, the grid will request new data from the server using a blank filter. I can't seem to find grid API to achieve this - does anyone have any ideas? Thanks.

  • Machine learning challenge: diagnosing program in Java/Groovy (data mining, machine learning)

    - by Registered User
    Hi all! I'm planning to develop a program in Java which will provide a diagnosis. The data set is divided into two parts: one for training and the other for testing. My program should learn to classify from the training data, which contains the answers to 30 questions (each in a new column, each record on a new line; the last column holds the diagnosis, 0 or 1). In the testing part of the data the diagnosis column is empty; the data set contains about 1000 records. The program should then make predictions on the testing part of the data. I've never done anything similar, so I'll appreciate any advice or information about solutions to similar problems. I was thinking about the Java Machine Learning Library or the Java Data Mining Package, but I'm not sure if that's the right direction, and I'm still not sure how to tackle this challenge. Please advise. All the best!
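    For illustration only (the question asks about Java; this is not a Java solution): the train-then-predict workflow described above, sketched in Python with scikit-learn. The file names, delimiter, and classifier choice are all assumptions:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # 31 columns: 30 answers plus the 0/1 diagnosis label.
        train = np.loadtxt("train.csv", delimiter=",")
        X_train, y_train = train[:, :30], train[:, 30]

        # The test file is assumed to carry only the 30 answer columns.
        X_test = np.loadtxt("test.csv", delimiter=",")

        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X_train, y_train)
        predictions = clf.predict(X_test)  # one 0/1 diagnosis per test record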

  • Qt qUncompress gzip data

    - by talei
    Hello, I've stumbled upon a problem and can't find a solution. What I want to do is uncompress data in Qt, using qUncompress(QByteArray), sent from the web in gzip format. I used Wireshark to determine that this is a valid gzip stream, and also tested it with zip/rar, both of which can uncompress it. The code so far is like this:

        // This data contains the string {status:false}, in gzip format.
        static const char dat[40] = {
            0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
            0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b,
            0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff,
            0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00
        };
        QByteArray data;
        data.append( dat, sizeof(dat) );

        // Prepend the expected uncompressed size, big-endian, as qUncompress
        // requires; the last 4 bytes in dat give 0x0e = 14.
        unsigned int size = 14;
        QByteArray dataPlusSize;
        dataPlusSize.append( (unsigned int)((size >> 24) & 0xFF) );
        dataPlusSize.append( (unsigned int)((size >> 16) & 0xFF) );
        dataPlusSize.append( (unsigned int)((size >> 8) & 0xFF) );
        dataPlusSize.append( (unsigned int)((size >> 0) & 0xFF) );
        dataPlusSize.append( data );  // append the compressed payload itself

        QByteArray uncomp = qUncompress( dataPlusSize );
        qDebug() << uncomp;

    Uncompression fails with: qUncompress: Z_DATA_ERROR: Input data is corrupted. AFAIK gzip consists of a 10-byte header, the DEFLATE payload, and an 8-byte trailer (a 4-byte CRC32 plus a 4-byte ISIZE, the uncompressed data size). Stripping the header and trailer should leave me with the DEFLATE data stream, but qUncompress yields the same error. I checked with a data string compressed in PHP, like this:

        $stringData = gzcompress( "{status:false}", 1 );

    and qUncompress does uncompress that data (I didn't see any gzip header there though, i.e. ID1 = 0x1f, ID2 = 0x8b). I stepped through the above code with the debugger, and the error occurs at inflate.c line 610:

        if (
        #endif
            ((BITS(8) << 8) + (hold >> 8)) % 31) {  // here is the error, WHY? (long unsigned int hold = 35615)
            strm->msg = (char *)"incorrect header check";
            state->mode = BAD;
            break;
        }

    I know that qUncompress is simply a wrapper over zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards
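    A note that may clarify the failure mode: zlib's inflate distinguishes the zlib wrapper (which qUncompress and PHP's gzcompress both use) from the gzip wrapper, and "incorrect header check" is exactly what the zlib-wrapper path reports when handed the gzip magic bytes. Python's zlib module exposes the same switch through its wbits parameter, shown here purely as an illustration using the byte string from the question:

        import zlib

        GZIP_DATA = bytes([
            0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
            0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b,
            0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff,
            0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00,
        ])

        # Default wbits expects the zlib wrapper, the format qUncompress
        # speaks: on gzip input it fails with "incorrect header check".
        try:
            zlib.decompress(GZIP_DATA)
        except zlib.error as exc:
            print(exc)

        # wbits = 16 + MAX_WBITS tells inflate to expect the gzip wrapper.
        print(zlib.decompress(GZIP_DATA, 16 + zlib.MAX_WBITS))
        # expected output: b'{status:false}', per the question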

  • RDLC - Adding a Data Source in VS2010

    - by Kezzer
    Greetings. I have an RDLC file to which I want to add a data source, so far without any luck. The data source is a custom class written by myself (just to add: we do this all the time). We recently converted over to the VS2010 RDLC format, which caused some problems, but we've made some changes to our implementation that work around the more major issues. So, back to the issue at hand: when I attempt to add my data source to the DummyDataSource list in the RDLC view in VS2010, nothing happens. It does add the data source to the list of data sources, but you can't select it from the drop-down list in the RDLC view, which means I can't add the data source at all. Has anyone come across this problem? Is there anything I need to check? I've searched with fervour and had no luck.

  • Best Practices - Data Annotations vs OnChanging in Entity Framework 4

    - by jptacek
    I was wondering what the general recommendation is for Entity Framework in terms of data validation. I am relatively new to EF, but it appears there are two main approaches to data validation. The first is to create a partial class for the model, and then perform data validations and update a rule violation collection of some sort. This is outlined at http://msdn.microsoft.com/en-us/library/cc716747.aspx The other is to use data annotations and then have the annotations perform data validation. Scott Guthrie explains this on his blog at http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx. I was wondering what the benefits are of one over the other. It seems the data annotations would be the preferred mechanism, especially as you move to RIA Services, but I want to ensure I am not missing something. Of course, nothing precludes using both of them together. Thanks John

  • Database design for very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the delicious website. The data available is in files of the form "Date,UserId,Url,Tags" (one line per bookmark). I normalized my database to 3NF, and because of the nature of the queries that we wanted to use in combination, I came down to 6 tables. The design looks fine; however, now that a large amount of data is in the database, most of the queries need to join at least 2 tables together to get the answer, sometimes 3 or 4. At first, we didn't have any performance issues, because for testing purposes we hadn't added too much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, this is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, but how do big companies handle large amounts of data in their databases? Don't they do the normalization? Thanks

  • What's an efficient way of calculating the nearest point?

    - by Griffo
    I have objects with location data stored in Core Data, and I would like to be able to fetch and display just the nearest point to the current location. I'm aware there are formulas which will calculate the distance from the current lat/long to a stored lat/long, but I'm curious about the best way to perform this for a set of 1000+ points stored in Core Data. I know I could just return the points from Core Data to an array and then loop through it looking for the minimum distance, but I'd imagine there's a more efficient method, possibly leveraging Core Data in some way. Any insight would be appreciated. EDIT: I don't know how I missed this on my initial search, but this SO question suggests just iterating through an array of Core Data objects, while limiting the array size with a bounding box based on the current location (sketched below). Is this the best I can do?
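    For illustration, a minimal sketch of that bounding-box-then-exact-distance approach, written in Python rather than against Core Data; the box half-width of 0.5 degrees is an arbitrary assumption:

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two lat/long points, in km."""
            radius = 6371.0
            d_lat = math.radians(lat2 - lat1)
            d_lon = math.radians(lon2 - lon1)
            a = (math.sin(d_lat / 2) ** 2
                 + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
                 * math.sin(d_lon / 2) ** 2)
            return 2 * radius * math.asin(math.sqrt(a))

        def nearest(points, lat, lon, box_deg=0.5):
            """points: iterable of (lat, lon) tuples. Cheap bounding-box
            prefilter first; exact haversine only on the survivors."""
            pts = list(points)
            if not pts:
                return None
            candidates = [(p_lat, p_lon) for p_lat, p_lon in pts
                          if abs(p_lat - lat) <= box_deg and abs(p_lon - lon) <= box_deg]
            if not candidates:
                # Nothing in the box: widen it and try again.
                return nearest(pts, lat, lon, box_deg * 2)
            return min(candidates,
                       key=lambda p: haversine_km(lat, lon, p[0], p[1]))

    In Core Data terms, the bounding box corresponds to an NSPredicate on the latitude/longitude attributes, so only the candidate objects ever need to be loaded into memory.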

  • Need help receiving ByteArray data

    - by k80sg
    Hi folks, I am trying to receive byte array data from a machine. It sends out 3 different types of data structure, each with a different number of fields consisting of mostly ints and a few floats, and different byte sizes: the first is 320 bytes, the second 420, and the third 560. When the sending program is launched, it fires all 3 types of data, with an interval of 1 sec. Example sending order:

        Pack1 - 320 bytes
        (1 sec later) Pack2 - 420 bytes
        (1 sec later) Pack3 - 560 bytes
        (1 sec later) Pack1 - 320 bytes
        ...

    How do I check the incoming byte size before passing it to:

        byte[] handsize = new byte[bytesize];

    since the data I receive is all out of order? For instance, using the following to read all ints:

        System.out.println("Reading data in int format:" + " " + datainput.readInt());

    I get many different sets of values whenever I run my program, with some valid field data, but they are all over the place. I am not too sure how exactly I should do it, but I tried the following, and apparently my data fields are not received in the correct sequence:

        BufferedInputStream bais = new BufferedInputStream(requestSocket.getInputStream());
        DataInputStream datainput = new DataInputStream(bais);
        byte[] handsize = new byte[560];
        datainput.readFully(handsize);
        int n = 0;
        int intByte[] = new int[140];
        for (int i = 0; i < 140; i++) {
            System.out.println("Reading data in int format:" + " " + datainput.readInt());
            intByte[n] = datainput.readInt();
            n = n + 1;
            System.out.println("The value in array is:" + intByte[0]);
            System.out.println("The value in array is:" + intByte[1]);
            System.out.println("The value in array is:" + intByte[2]);
            System.out.println("The value in array is:" + intByte[3]);
        }

    Also from the above code, the order of the values printed out with

        System.out.println("Reading data in int format:" + " " + datainput.readInt());

    and

        System.out.println("The value in array is:" + intByte[0]);
        System.out.println("The value in array is:" + intByte[1]);

    are different. Any help will be appreciated. Thanks
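    For illustration (the question is Java, but the framing problem is language-independent): read exactly the advertised number of bytes per record before decoding any fields, and decode each value exactly once. A Python sketch; the type-to-size table and the all-ints record layout are assumptions:

        import struct

        PACK_SIZES = {1: 320, 2: 420, 3: 560}  # hypothetical type -> size table

        def recv_exact(sock, n):
            """Read exactly n bytes from a socket, or raise on EOF."""
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise EOFError("connection closed mid-record")
                buf += chunk
            return buf

        def read_pack1(sock):
            """E.g. a 320-byte record read as 80 big-endian ints; a real
            layout mixing ints and floats needs a format string that
            mirrors the sender's struct, field by field."""
            raw = recv_exact(sock, PACK_SIZES[1])
            return struct.unpack(">80i", raw)

    The same discipline in Java (readFully into a record-sized buffer, then one readInt/readFloat per field) also avoids the double read visible in the loop above, where each iteration consumes one int in the println and stores the next one.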

  • C# SqlBulkCopy and Data Entities

    - by KP
    Guys, my current project consists of 3 standard layers: data, business, and presentation. I would like to use data entities for all my data access needs. Part of the functionality of the app will be that it needs to copy all data within a flat file into a database. The file is not so big, so I can use SqlBulkCopy. I have found several articles regarding the usage of the SqlBulkCopy class in .NET. However, all the articles use DataTables to move data back and forth. Is there a way to use data entities along with SqlBulkCopy, or will I have to use DataTables?

  • Memory-efficient import of many data files into a pandas DataFrame in Python

    - by richardh
    I import a directory of |-delimited .dat files into a pandas DataFrame. The following code works, but I eventually run out of RAM with a MemoryError:

        import pandas as pd
        import glob

        temp = []
        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            print dataFile
            temp.append(pd.read_table(dataFile, delimiter='|', header=0))
        masterAll = pd.concat(temp)

    Is there a more memory-efficient approach? Or should I go whole hog to a database? (I will move to a database eventually, but I am baby-stepping my move to pandas.) Thanks! FWIW, here is the head of an example .dat file:

        cik|cname|ftype|date|fileloc
        1000032|BINCH JAMES G|4|2011-03-08|edgar/data/1000032/0001181431-11-016512.txt
        1000045|NICHOLAS FINANCIAL INC|10-Q|2011-02-11|edgar/data/1000045/0001193125-11-031933.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-11|edgar/data/1000045/0001193125-11-005531.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-27|edgar/data/1000045/0001193125-11-015631.txt
        1000045|NICHOLAS FINANCIAL INC|SC 13G/A|2011-02-14|edgar/data/1000045/0000929638-11-00151.txt
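    One lower-memory variant, sketched under the assumption that PyTables is installed (the backend for pandas' HDFStore; string columns may also need a min_itemsize hint in practice): append each file's frame to an on-disk store as you go, so RAM only ever holds one file's rows, and query the store afterwards.

        import glob
        import pandas as pd

        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        store = pd.HDFStore(dataDir + '/masterfiles.h5', mode='w')
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            frame = pd.read_table(dataFile, delimiter='|', header=0)
            store.append('master', frame)  # written to disk, not accumulated in RAM
        store.close()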

  • Grails Date Property Editor

    - by Jan
    Hey folks, I'm using the jQuery UI datepicker instead of the <g:datePicker>, which puts the selected date in a textbox. Now I want to save this neatly back into my database, and came across custom property editors (see this StackOverflow topic). However, adding this custom PropertyEditor didn't change anything: dates are still displayed like '2010-01-01 00:00:00.0', and if I try to save a date it crashes with: Cannot cast object '05.05.2010' with class 'java.lang.String' to class 'java.util.Date'. Is there any additional magic needed, like a special naming of the textbox or something like that? Thanks for your help, Jan

  • C# Winforms ADO.NET - DataGridView INSERT starting with null data

    - by Geo Ego
    I have a C# Winforms app that is connecting to a SQL Server 2005 database. The form I am currently working on allows a user to enter a large amount of different types of data into various textboxes, comboboxes, and a DataGridView to insert into the database. It all represents one particular type of machine, and the data is spread out over about nine tables. The problem I have is that my DataGridView represents a type of data that may or may not be added to the database. Thus, when the DataGridView is created, it is empty and not databound, and so data cannot be entered. My question is, should I create the table with hard-coded field names representing the way that the data looks in the database, or is there a way to simply have the column names populate with no data so that the user can enter it if they like? I don't like the idea of hard-coding them in case there is a change in the database schema, but I'm not sure how else to deal with this problem.

  • ASP.net - First build fails with 'Unknown server tag'

    - by Sophia
    I have multiple custom controls used on a ASPX (and C#) page registered from within the page rather than in Web.Config. On the first build (or rebuild), build fails with error messages for wherever I've used the custom controls. Subsequent builds are successful. The error message: Unknown server tag 'prefix:ExampleControl'. What might cause this, and how can I fix it? Register syntax: <%@ Register Src="ControlsFolder/ExampleControl.ascx" TagName="ExampleControl" TagPrefix="prefix" %> <!-- etc --> Usage syntax: <prefix:ExampleControl runat="server" ID="ExampleControl1" /> <!-- etc -->

  • Providing dynamic data to webpage

    - by Marius
    Hi, I have a web page that displays dynamic data which changes every 2 seconds. The data is selected from various data sources, including Oracle. Currently, the page reloads every 10 seconds and runs a PHP script which retrieves the data and displays the page. I have other pages that give a different view on the same data, which means the same query is run again for them as well. If I have 4 of these pages with 10 concurrent users each, suddenly the data retrieval happens 40 times every 10 seconds, which is obviously not ideal. I have some ideas on how to improve this situation, but I thought I would ask for ideas from other experts who might have come across a similar situation. I'm not bound to PHP, and my server is on a Linux platform. Regards, Marius
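    One common remedy, sketched in Python under stated assumptions (fetch_from_sources is a hypothetical placeholder for the expensive multi-source query): decouple the query rate from the user count by having a single background job refresh a shared snapshot every 2 seconds, so every page request reads the snapshot instead of hitting Oracle.

        import json
        import threading
        import time

        _cache = {"data": None}
        _lock = threading.Lock()

        def refresh_loop(fetch_from_sources, interval=2.0):
            """Run in one background thread: the only caller of the query."""
            while True:
                data = fetch_from_sources()
                with _lock:
                    _cache["data"] = data
                time.sleep(interval)

        def snapshot_json():
            """Cheap read used by every page request, however many users."""
            with _lock:
                return json.dumps(_cache["data"])

    With 4 views and 10 users each, the data sources then see one query every 2 seconds instead of 40 every 10 seconds.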

  • How to use json object notation to retrieve dbpedia json data

    - by Margi
    In my PHP code, I am retrieving JSON data as below:

        <?php
        $url = "http://dbpedia.org/data/Los_Angeles.json";
        $data = file_get_contents($url);
        echo $data;
        ?>

    The JavaScript code consumes this JSON data returned from PHP and gets the JSON object as below:

        var doc = eval('(' + request.responseText + ')');

    How do I retrieve the following JSON data using dot notation? The keys contain URLs.

        "http://dbpedia.org/ontology/populationTotal" : [ { "type" : "literal", "value" : 3792621 , "datatype" : "http://www.w3.org/2001/XMLSchema#integer" } ] ,
        "http://dbpedia.org/ontology/PopulatedPlace/areaTotal" : [ { "type" : "literal", "value" : "1301.9688931491348" , "datatype" : "http://dbpedia.org/datatype/squareKilometre" } ]
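    A point of clarification the question turns on: keys that contain URLs (or any characters illegal in identifiers) cannot be reached with dot notation at all; they require subscript (bracket) lookup, and the same rule holds for the JavaScript object produced above. A sketch in Python for illustration; the top-level resource key follows the usual DBpedia JSON layout and should be verified against the actual response:

        import json
        import urllib.request

        url = "http://dbpedia.org/data/Los_Angeles.json"
        with urllib.request.urlopen(url) as resp:
            doc = json.load(resp)

        # Bracket lookup with the full URL as the key; dot notation like
        # doc.http://... is not valid syntax in Python or JavaScript.
        resource = doc["http://dbpedia.org/resource/Los_Angeles"]
        population = resource["http://dbpedia.org/ontology/populationTotal"][0]["value"]
        print(population)  # expected: 3792621, per the snippet above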

  • Write data to an xml file.

    - by Bobby
    My aim is to write an XML file with a few tags whose values are in the regional language. I'm using Python, with IDLE (the Python GUI) for programming. When I try to write the words to an XML file, it gives the following error: UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128). For now, I'm not using any XML writer library; instead I'm opening a file "test.xml" and writing the data into it. The error is raised by the line: f.write("\t<tag>" + data + "</tag>"). If I replace the write statement with a print statement, it prints the data properly to the Python shell. I'm reading the data from an Excel file which is not in UTF-8, -16, or -32 encoding; it's in some other format, and cp1252 reads the data properly. Any help in getting this data written to an XML file would be highly appreciated. Thanks in advance, Bob
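    A minimal sketch of the usual fix, assuming Python 2 (matching the era of the question): open the file through the codecs module with an explicit UTF-8 encoding, so unicode strings are encoded on write instead of falling back to the default ASCII codec. The tag name and sample text are hypothetical:

        # -*- coding: utf-8 -*-
        import codecs

        data = u"\u0915\u0916"  # hypothetical regional-language text
        with codecs.open("test.xml", "w", encoding="utf-8") as f:
            f.write(u"\t<name>" + data + u"</name>\n")

    The same applies on the reading side: decode the cp1252 input to unicode first (e.g. raw.decode("cp1252")), then let the UTF-8 writer handle the encoding.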

  • 32 and 64 bit assemblies in one windows installer

    - by Giorgi
    Hello, I have an application written in C# which depends on the sqlite managed provider. The sqlite provider is platform dependent (there are two dlls with the same name, one for 32 and one for 64 bit applications). The application loads the appropriate one at runtime based on the OS. The problem is that while creating an installer I cannot add the 64 bit dll to the setup project, as I get the following error: File '' targeting '' is not compatible with the project's target platform ''. I would use another installer, but I have a custom action which must be invoked during the setup. So I wanted to know if there is an installer which will let me add both the 32 and 64 bit dlls and execute a custom action written in C#. One possible solution is to have two installers, but I would like to avoid that if possible. Any suggestions?

  • Blackberry Player, custom data source

    - by Alex
    Hello I must create a custom media player within the application with support for mp3 and wav files. I read in the documentation i cant seek or get the media file duration without a custom datasoruce. I checked the demo in the JDE 4.6 but i have still problems... I cant get the duration, it return much more then the expected so i`m sure i screwed up something while i modified the code to read the mp3 file locally from the filesystem. Somebody can help me what i did wrong ? (I can hear the mp3, so the player plays it correctly from start to end) I must support OSs = 4.6. Thank You Here is my modified datasource LimitedRateStreaminSource.java * Copyright © 1998-2009 Research In Motion Ltd. Note: For the sake of simplicity, this sample application may not leverage resource bundles and resource strings. However, it is STRONGLY recommended that application developers make use of the localization features available within the BlackBerry development platform to ensure a seamless application experience across a variety of languages and geographies. For more information on localizing your application, please refer to the BlackBerry Java Development Environment Development Guide associated with this release. */ package com.halcyon.tawkwidget.model; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import javax.microedition.io.Connector; import javax.microedition.io.file.FileConnection; import javax.microedition.media.Control; import javax.microedition.media.protocol.ContentDescriptor; import javax.microedition.media.protocol.DataSource; import javax.microedition.media.protocol.SourceStream; import net.rim.device.api.io.SharedInputStream; /** * The data source used by the BufferedPlayback's media player. / public final class LimitedRateStreamingSource extends DataSource { /* The max size to be read from the stream at one time. */ private static final int READ_CHUNK = 512; // bytes /** A reference to the field which displays the load status. */ //private TextField _loadStatusField; /** A reference to the field which displays the player status. */ //private TextField _playStatusField; /** * The minimum number of bytes that must be buffered before the media file * will begin playing. */ private int _startBuffer = 200000; /** The maximum size (in bytes) of a single read. */ private int _readLimit = 32000; /** * The minimum forward byte buffer which must be maintained in order for * the video to keep playing. If the forward buffer falls below this * number, the playback will pause until the buffer increases. */ private int _pauseBytes = 64000; /** * The minimum forward byte buffer required to resume * playback after a pause. */ private int _resumeBytes = 128000; /** The stream connection over which media content is passed. */ //private ContentConnection _contentConnection; private FileConnection _fileConnection; /** An input stream shared between several readers. */ private SharedInputStream _readAhead; /** A stream to the buffered resource. */ private LimitedRateSourceStream _feedToPlayer; /** The MIME type of the remote media file. */ private String _forcedContentType; /** A counter for the total number of buffered bytes */ private volatile int _totalRead; /** A flag used to tell the connection thread to stop */ private volatile boolean _stop; /** * A flag used to indicate that the initial buffering is complete. In * other words, that the current buffer is larger than the defined start * buffer size. 
*/ private volatile boolean _bufferingComplete; /** A flag used to indicate that the remote file download is complete. */ private volatile boolean _downloadComplete; /** The thread which retrieves the remote media file. */ private ConnectionThread _loaderThread; /** The local save file into which the remote file is written. */ private FileConnection _saveFile; /** A stream for the local save file. */ private OutputStream _saveStream; /** * Constructor. * @param locator The locator that describes the DataSource. */ public LimitedRateStreamingSource(String locator) { super(locator); } /** * Open a connection to the locator. * @throws IOException */ public void connect() throws IOException { //Open the connection to the remote file. _fileConnection = (FileConnection)Connector.open(getLocator(), Connector.READ); //Cache a reference to the locator. String locator = getLocator(); //Report status. System.out.println("Loading: " + locator); //System.out.println("Size: " + _contentConnection.getLength()); System.out.println("Size: " + _fileConnection.totalSize()); //The name of the remote file begins after the last forward slash. int filenameStart = locator.lastIndexOf('/'); //The file name ends at the first instance of a semicolon. int paramStart = locator.indexOf(';'); //If there is no semicolon, the file name ends at the end of the line. if (paramStart < 0) { paramStart = locator.length(); } //Extract the file name. String filename = locator.substring(filenameStart, paramStart); System.out.println("Filename: " + filename); //Open a local save file with the same name as the remote file. _saveFile = (FileConnection) Connector.open("file:///SDCard/blackberry/music" + filename, Connector.READ_WRITE); //If the file doesn't already exist, create it. if (!_saveFile.exists()) { _saveFile.create(); } System.out.println("---------- 1"); //Open the file for writing. _saveFile.setReadable(true); //Open a shared input stream to the local save file to //allow many simultaneous readers. SharedInputStream fileStream = SharedInputStream.getSharedInputStream(_saveFile.openInputStream()); //Begin reading at the beginning of the file. fileStream.setCurrentPosition(0); System.out.println("---------- 2"); //If the local file is smaller than the remote file... if (_saveFile.fileSize() < _fileConnection.totalSize()) { System.out.println("---------- 3"); //Did not get the entire file, set the system to try again. _saveFile.setWritable(true); System.out.println("---------- 4"); //A non-null save stream is used as a flag later to indicate that //the file download was incomplete. _saveStream = _saveFile.openOutputStream(); System.out.println("---------- 5"); //Use a new shared input stream for buffered reading. _readAhead = SharedInputStream.getSharedInputStream(_fileConnection.openInputStream()); System.out.println("---------- 6"); } else { //The download is complete. System.out.println("---------- 7"); _downloadComplete = true; //We can use the initial input stream to read the buffered media. _readAhead = fileStream; System.out.println("---------- 8"); //We can close the remote connection. _fileConnection.close(); System.out.println("---------- 9"); } if (_forcedContentType != null) { //Use the user-defined content type if it is set. System.out.println("---------- 10"); _feedToPlayer = new LimitedRateSourceStream(_readAhead, _forcedContentType); System.out.println("---------- 11"); } else { System.out.println("---------- 12"); //Otherwise, use the MIME types of the remote file. 
// _feedToPlayer = new LimitedRateSourceStream(_readAhead, _fileConnection)); } System.out.println("---------- 13"); } /** * Destroy and close all existing connections. */ public void disconnect() { try { if (_saveStream != null) { //Destroy the stream to the local save file. _saveStream.close(); _saveStream = null; } //Close the local save file. _saveFile.close(); if (_readAhead != null) { //Close the reader stream. _readAhead.close(); _readAhead = null; } //Close the remote file connection. _fileConnection.close(); //Close the stream to the player. _feedToPlayer.close(); } catch (Exception e) { System.err.println(e.getMessage()); } } /** * Returns the content type of the remote file. * @return The content type of the remote file. */ public String getContentType() { return _feedToPlayer.getContentDescriptor().getContentType(); } /** * Returns a stream to the buffered resource. * @return A stream to the buffered resource. */ public SourceStream[] getStreams() { return new SourceStream[] { _feedToPlayer }; } /** * Starts the connection thread used to download the remote file. */ public void start() throws IOException { //If the save stream is null, we have already completely downloaded //the file. if (_saveStream != null) { //Open the connection thread to finish downloading the file. _loaderThread = new ConnectionThread(); _loaderThread.start(); } } /** * Stop the connection thread. */ public void stop() throws IOException { //Set the boolean flag to stop the thread. _stop = true; } /** * @see javax.microedition.media.Controllable#getControl(String) */ public Control getControl(String controlType) { // No implemented Controls. return null; } /** * @see javax.microedition.media.Controllable#getControls() */ public Control[] getControls() { // No implemented Controls. return null; } /** * Force the lower level stream to a given content type. Must be called * before the connect function in order to work. * @param contentType The content type to use. */ public void setContentType(String contentType) { _forcedContentType = contentType; } /** * A stream to the buffered media resource. */ private final class LimitedRateSourceStream implements SourceStream { /** A stream to the local copy of the remote resource. */ private SharedInputStream _baseSharedStream; /** Describes the content type of the media file. */ private ContentDescriptor _contentDescriptor; /** * Constructor. Creates a LimitedRateSourceStream from * the given InputStream. * @param inputStream The input stream used to create a new reader. * @param contentType The content type of the remote file. */ LimitedRateSourceStream(InputStream inputStream, String contentType) { System.out.println("[LimitedRateSoruceStream]---------- 1"); _baseSharedStream = SharedInputStream.getSharedInputStream(inputStream); System.out.println("[LimitedRateSoruceStream]---------- 2"); _contentDescriptor = new ContentDescriptor(contentType); System.out.println("[LimitedRateSoruceStream]---------- 3"); } /** * Returns the content descriptor for this stream. * @return The content descriptor for this stream. */ public ContentDescriptor getContentDescriptor() { return _contentDescriptor; } /** * Returns the length provided by the connection. * @return long The length provided by the connection. */ public long getContentLength() { return _fileConnection.totalSize(); } /** * Returns the seek type of the stream. */ public int getSeekType() { return RANDOM_ACCESSIBLE; //return SEEKABLE_TO_START; } /** * Returns the maximum size (in bytes) of a single read. 
*/ public int getTransferSize() { return _readLimit; } /** * Writes bytes from the buffer into a byte array for playback. * @param bytes The buffer into which the data is read. * @param off The start offset in array b at which the data is written. * @param len The maximum number of bytes to read. * @return the total number of bytes read into the buffer, or -1 if * there is no more data because the end of the stream has been reached. * @throws IOException */ public int read(byte[] bytes, int off, int len) throws IOException { System.out.println("[LimitedRateSoruceStream]---------- 5"); System.out.println("Read Request for: " + len + " bytes"); //Limit bytes read to our readLimit. int readLength = len; System.out.println("[LimitedRateSoruceStream]---------- 6"); if (readLength > getReadLimit()) { readLength = getReadLimit(); } //The number of available byes in the buffer. int available; //A boolean flag indicating that the thread should pause //until the buffer has increased sufficiently. boolean paused = false; System.out.println("[LimitedRateSoruceStream]---------- 7"); for (;;) { available = _baseSharedStream.available(); System.out.println("[LimitedRateSoruceStream]---------- 8"); if (_downloadComplete) { //Ignore all restrictions if downloading is complete. System.out.println("Complete, Reading: " + len + " - Available: " + available); return _baseSharedStream.read(bytes, off, len); } else if(_bufferingComplete) { if (paused && available > getResumeBytes()) { //If the video is paused due to buffering, but the //number of available byes is sufficiently high, //resume playback of the media. System.out.println("Resuming - Available: " + available); paused = false; return _baseSharedStream.read(bytes, off, readLength); } else if(!paused && (available > getPauseBytes() || available > readLength)) { //We have enough information for this media playback. if (available < getPauseBytes()) { //If the buffer is now insufficient, set the //pause flag. paused = true; } System.out.println("Reading: " + readLength + " - Available: " + available); return _baseSharedStream.read(bytes, off, readLength); } else if(!paused) { //Set pause until loaded enough to resume. paused = true; } } else { //We are not ready to start yet, try sleeping to allow the //buffer to increase. try { Thread.sleep(500); } catch (Exception e) { System.err.println(e.getMessage()); } } } } /** * @see javax.microedition.media.protocol.SourceStream#seek(long) */ public long seek(long where) throws IOException { _baseSharedStream.setCurrentPosition((int) where); return _baseSharedStream.getCurrentPosition(); } /** * @see javax.microedition.media.protocol.SourceStream#tell() */ public long tell() { return _baseSharedStream.getCurrentPosition(); } /** * Close the stream. * @throws IOException */ void close() throws IOException { _baseSharedStream.close(); } /** * @see javax.microedition.media.Controllable#getControl(String) */ public Control getControl(String controlType) { // No implemented controls. return null; } /** * @see javax.microedition.media.Controllable#getControls() */ public Control[] getControls() { // No implemented controls. return null; } } /** * A thread which downloads the remote file and writes it to the local file. */ private final class ConnectionThread extends Thread { /** * Download the remote media file, then write it to the local * file. * @see java.lang.Thread#run() */ public void run() { try { byte[] data = new byte[READ_CHUNK]; int len = 0; //Until we reach the end of the file. 
while (-1 != (len = _readAhead.read(data))) { _totalRead += len; if (!_bufferingComplete && _totalRead > getStartBuffer()) { //We have enough of a buffer to begin playback. _bufferingComplete = true; System.out.println("Initial Buffering Complete"); } if (_stop) { //Stop reading. return; } } System.out.println("Downloading Complete"); System.out.println("Total Read: " + _totalRead); //If the downloaded data is not the same size //as the remote file, something is wrong. if (_totalRead != _fileConnection.totalSize()) { System.err.println("* Unable to Download entire file *"); } _downloadComplete = true; _readAhead.setCurrentPosition(0); //Write downloaded data to the local file. while (-1 != (len = _readAhead.read(data))) { _saveStream.write(data); } } catch (Exception e) { System.err.println(e.toString()); } } } /** * Gets the minimum forward byte buffer which must be maintained in * order for the video to keep playing. * @return The pause byte buffer. */ int getPauseBytes() { return _pauseBytes; } /** * Sets the minimum forward buffer which must be maintained in order * for the video to keep playing. * @param pauseBytes The new pause byte buffer. */ void setPauseBytes(int pauseBytes) { _pauseBytes = pauseBytes; } /** * Gets the maximum size (in bytes) of a single read. * @return The maximum size (in bytes) of a single read. */ int getReadLimit() { return _readLimit; } /** * Sets the maximum size (in bytes) of a single read. * @param readLimit The new maximum size (in bytes) of a single read. */ void setReadLimit(int readLimit) { _readLimit = readLimit; } /** * Gets the minimum forward byte buffer required to resume * playback after a pause. * @return The resume byte buffer. */ int getResumeBytes() { return _resumeBytes; } /** * Sets the minimum forward byte buffer required to resume * playback after a pause. * @param resumeBytes The new resume byte buffer. */ void setResumeBytes(int resumeBytes) { _resumeBytes = resumeBytes; } /** * Gets the minimum number of bytes that must be buffered before the * media file will begin playing. * @return The start byte buffer. */ int getStartBuffer() { return _startBuffer; } /** * Sets the minimum number of bytes that must be buffered before the * media file will begin playing. * @param startBuffer The new start byte buffer. */ void setStartBuffer(int startBuffer) { _startBuffer = startBuffer; } } And in this way i use it: LimitedRateStreamingSource source = new LimitedRateStreamingSource("file:///SDCard/music3.mp3"); source.setContentType("audio/mpeg"); mediaPlayer = javax.microedition.media.Manager.createPlayer(source); mediaPlayer.addPlayerListener(this); mediaPlayer.realize(); mediaPlayer.prefetch(); After start i use mediaPlayer.getDuration it returns lets say around 24:22 (the inbuild media player in the blackberry say the file length is 4:05) I tried to get the duration in the listener and there unfortunatly returned around 64 minutes, so im sure something is not good inside the datasoruce....

  • Show data from MySQL after barcode split and character match

    - by klox
    I need some code for the next step; this is my first step:

        <script>
        $("#mod").change(function() {
            var barCode = $("#mod").val();
            var data = barCode.split(" ");
            $("#mod").val(data[0]);
            $("#seri").val(data[1]);
            var str = data[0];
            var matches = str.match(/(EE|[EJU]).*(D)/i);
        });
        </script>

    After the match, I want the result to connect to the database and then show data from a table inside <div id="value">. How do I do that?

  • How to prevent a dll from being loaded in other apps

    - by dhh
    Hello, currently I develop a C#.Net application in which I'm using a custom control I developed some time ago. I need the dll to be shipped within the new application - but understandably I do not want the dll file to be used for foreign apps. That's why I need the custom dll to be somehow compiled within the new application. Currently the dll is copied into the application directory. Any ideas? Should be trivial imho. Thanks & regards, Daniel

  • ItemFileWriteStore: how to change the data?

    - by jeff porter
    Hi, I'd like to change the data in my dojo.data.ItemFileWriteStore. Currently I have... var rawdataCacheItems = [{'cachereq':'14:52:03','name':'Test1','cacheID':'3','ver':'7'}]; var cacheInfo = new dojo.data.ItemFileWriteStore({ data: { identifier: 'cacheID', label: 'cacheID', items: rawdataCacheItems } }); I'd like to make a XHR request to get a new JSON string of the data to render. What I can't work out is how to change the data in the "items" held by ItemFileWriteStore. Can anyone point me in the correct direction? Thanks Jeff Porter

  • selectors-api for data attributes

    - by MJ
    In HTML5, CSS selectors seem to operate well with data-* attributes. For example:

        <style>
            div[data-foo='bar'] { background:#eee; }
        </style>
        <div data-foo='bar'>colored</div>
        <div>not colored</div>

    will properly style the first <div>. But attempts to select such elements using the selectors-api fail. Examples:

        var foos = document.querySelectorAll("div[data-foo]='bar'");

    or

        var foos = document.querySelectorAll("div data-foo='bar'");

    In Chrome and Safari, this produces a cryptic error: SYNTAX_ERR: DOM Exception 12. Any thoughts on how to use the selectors-api to properly select elements on the basis of data-* attributes?
