Search Results

Search found 2074 results on 83 pages for 'arbitrary precision'.

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over. The general data model: there are records with RIDs, grouped together by group IDs (GIDs). The records have arbitrary data fields (maybe 5-15), a few of them mandatory and the rest optional, and thus sparse. The general use model: there are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen they need to be pretty fast, or at least constant speed regardless of the database size. When a read happens, it needs to retrieve all the records/RIDs with a certain GID. I don't need to search by the record field values; primarily I will query by the GID and maybe the RID. What database implementation should I use? I did some initial research between document-oriented and column-oriented databases, and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID, but I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, with a column for each record/RID? (There may be thousands or hundreds of thousands of records in a group/GID.) Or should my key be the RID, with a column on each row holding the GID value? Which results in faster writes and reads under this model?
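
    A hedged sketch of the first Cassandra layout the question describes, in CQL terms (table and field names are made up): the GID as partition key and the RID as clustering key, so each group is one wide partition and reading a whole group is a single-partition query.

        -- one partition per GID, one clustering row per RID
        CREATE TABLE records (
            gid     text,
            rid     text,
            f1      text,                  -- a mandatory field
            f2      text,                  -- another mandatory field
            extras  map<text, text>,       -- the sparse optional fields
            PRIMARY KEY (gid, rid)
        );

        -- retrieve every record in a group:
        SELECT * FROM records WHERE gid = ?;

    Writes land in one partition per group, which Cassandra handles cheaply; very large groups (hundreds of thousands of RIDs) may warrant splitting partitions, which is worth testing.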

    Read the article

  • NLP - Queries using semantic wildcards in full text searching, maybe with Lucene?

    - by Zsolt
    Let's say I have a big corpus (in English or an arbitrary language), and I want to perform some semantic search on it. For example, I have the query: "Be careful: [art] armada of [sg] is coming to [do sg]!" And the corpus contains the following sentence: "Be careful: an armada of alien ships is coming to destroy our planet!" My query string can contain "semantic placeholders", such as: [art] - a placeholder for articles (for example a / an in English); [sg], [do sg] - placeholders for NPs and VPs (subjects and predicates). I would like to develop a library capable of handling these queries efficiently. I suspect that some kind of POS-tagging would be necessary for parsing the text, but because I don't want to reimplement an existing full-text search engine from scratch, I'm wondering how I could integrate this behaviour into a search engine like Lucene. I know there are SpanQueries which can behave similarly in some cases, but as far as I can see, Lucene doesn't do any semantic processing of stored text. Is it possible to implement a behaviour like this, or do I have to write my own search engine?
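
    One possible building block, sketched with NLTK rather than Lucene (an assumption, not the poster's setup): tag the corpus at index time and store word/TAG tokens, so a placeholder like [art] can be matched against the tag part of the token.

        # requires NLTK's tokenizer and tagger models to be downloaded
        import nltk

        def annotate(sentence):
            """Turn a sentence into word/TAG tokens, e.g. 'an/DT armada/NN ...'."""
            tokens = nltk.word_tokenize(sentence)
            return ['%s/%s' % (word, tag) for word, tag in nltk.pos_tag(tokens)]

        print(annotate("Be careful: an armada of alien ships is coming to destroy our planet!"))

    Indexed this way, [art] becomes a wildcard query such as */DT (Lucene permits leading wildcards only if explicitly enabled); the open problem remains the phrase-level placeholders for whole NPs and VPs.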

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points, and apply the inverse to the rectangle. This seemed to work but was too slow, as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with the Kinect and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with view and projection matrices, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrices. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?
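
    For reference, a minimal sketch of the 2D pre-warp described in the first approach, using OpenCV's Python bindings (the corner coordinates are made up; the Kinect scan would supply the real ones):

        import cv2
        import numpy as np

        # ideal rectangle corners and the corners actually scanned on the floor
        rect    = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
        scanned = np.float32([[0.10, 0.05], [0.90, 0.00], [1.00, 1.00], [0.00, 0.95]])

        H = cv2.getPerspectiveTransform(rect, scanned)   # rect -> projected
        H_inv = np.linalg.inv(H)                         # pre-warp: projected -> rect
        print(H_inv)

    The 3D question - whether such a homography can always be split into an XNA view and projection matrix pair - is exactly what the post asks, and is not answered by this sketch.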

    Read the article

  • double.NaN Equality in MS Test

    - by RichK
    Why am I getting this result?

        [TestMethod]
        public void nan_test()
        {
            Assert.AreEqual(1, double.NaN, 1E-1); // <-- Passes
            Assert.AreEqual(1, double.NaN);       // <-- Fails
        }

    What difference does the delta make when asserting that NaN equals a number? Surely it should always return false. I am aware of IsNaN, but that's not useful here (see below). Background: I have a function that (erroneously) returns NaN; it was meant to be a real number, but the test still passed. I'm using the delta because it's a double-precision equality test; the original test used 1E-9.
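
    The likely cause, sketched below (an inference, not MSTest's actual source): the delta overload fails only when the difference exceeds the delta, and every comparison involving NaN evaluates to false, so the failure branch is never taken.

        using System;

        class Demo
        {
            static void Main()
            {
                double diff = Math.Abs(1 - double.NaN);  // NaN
                Console.WriteLine(diff > 0.1);           // False -> the assert never fails
                Console.WriteLine(double.IsNaN(diff));   // True
            }
        }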

    Read the article

  • Adding text over existing PDFs using reportlab

    - by Shane
    I'm interested in filling out existing PDF forms programmatically. All I really need to do is pull information from user input and then place the appropriate text over an existing PDF in the appropriate locations. I can already do this with reportlab by feeding the same sheet of paper into a printer twice, but this just really rubs me the wrong way. I'm tempted to just personally reverse-engineer each existing PDF and draw every line and character myself before adding the user-entered text, but I wanted to check whether there is an easy way to take an existing PDF and set it as a background for some extra text. I'd really prefer to use Python, as it's the only language I feel comfortable with. I also realize that I could just scan the document itself and use the resulting raster image as a background, but I would prefer the precision of vector graphics. It seems like ReportLab has a commercial product with this functionality, and the specific function I'm looking for is in it (copyPages) - but it seems like overkill to pay for a four-figure product for a single, simple function for a nonprofit use.
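
    A hedged sketch of the usual zero-cost workaround, pairing reportlab with the pyPdf library (an assumption, not the poster's setup; file names and coordinates are made up): draw only the new text with reportlab, then merge that page onto the existing form.

        from pyPdf import PdfFileReader, PdfFileWriter
        from reportlab.pdfgen import canvas

        # 1. render just the user-entered text to a blank overlay PDF
        c = canvas.Canvas("overlay.pdf")
        c.drawString(100, 700, "user-entered text")
        c.save()

        # 2. stamp the overlay onto the first page of the existing form
        form = PdfFileReader(open("form.pdf", "rb"))
        overlay = PdfFileReader(open("overlay.pdf", "rb"))
        page = form.getPage(0)
        page.mergePage(overlay.getPage(0))

        out = PdfFileWriter()
        out.addPage(page)
        out.write(open("filled.pdf", "wb"))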

    Read the article

  • Java type for date/time when using Oracle Date with Hibernate

    - by Marcus
    We have an Oracle Date column. At first, in our Java/Hibernate class we were using java.sql.Date. This worked, but it didn't seem to store any time information in the database when we saved, so I changed the Java data type to Timestamp. Now we get this error:

        org.springframework.beans.factory.BeanCreationException: Error creating bean with name
        'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0' defined in
        class path resource [margin-service-domain-config.xml]: Initialization of bean failed; nested
        exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with
        name 'sessionFactory' defined in class path resource [margin-service-domain-config.xml]:
        Invocation of init method failed; nested exception is org.hibernate.HibernateException:
        Wrong column type: CREATE_TS, expected: timestamp

    Any ideas on how to map an Oracle Date while retaining the time portion? Update: I can get it to work if I use the Oracle Timestamp data type, but ideally I don't want that level of precision. I just want the basic Oracle Date.
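
    One commonly used mapping for this situation, sketched with JPA annotations (the entity and property names are assumptions): keep the column as an Oracle DATE but treat the property as a timestamp on the Java side, and pin the columnDefinition so schema validation stops expecting a TIMESTAMP column.

        import java.util.Date;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Temporal;
        import javax.persistence.TemporalType;

        @Entity
        public class MarginRecord {
            @Id
            private Long id;

            // Oracle DATE already stores the time of day down to the second
            @Temporal(TemporalType.TIMESTAMP)
            @Column(name = "CREATE_TS", columnDefinition = "DATE")
            private Date createTs;
        }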

    Read the article

  • Enum.HasFlag

    - by Scott Dorman
    An enumerated type, also called an enumeration (or just an enum for short), is simply a way to create a numeric type restricted to a predetermined set of valid values with meaningful names for those values. While most enumerations represent discrete values, or well-known combinations of those values, sometimes you want to combine values in an arbitrary fashion. These enumerations are known as flags enumerations because the values represent flags which can be set or unset. To combine multiple enumeration values, you use the logical OR operator. For example, consider the following:

        public enum FileAccess
        {
            None = 0,
            Read = 1,
            Write = 2,
        }

        class Program
        {
            static void Main(string[] args)
            {
                FileAccess access = FileAccess.Read | FileAccess.Write;
                Console.WriteLine(access);
            }
        }

    The output of this simple console application is:

        3

    The value 3 is the numeric value associated with the combination of FileAccess.Read and FileAccess.Write. Clearly, this isn’t the best representation. What you really want is for the output to look like:

        Read, Write

    To achieve this result, you simply add the Flags attribute to the enumeration. The Flags attribute changes how the string representation of the enumeration value is displayed when using the ToString() method. Although the .NET Framework does not require it, enumerations that will be used to represent flags should be decorated with the Flags attribute since it provides a clear indication of intent. One “problem” with flags enumerations is determining when a particular flag is set. The code to do this isn’t particularly difficult, but unless you use it regularly it can be easy to forget. To test if the access variable has the FileAccess.Read flag set, you would use the following code:

        (access & FileAccess.Read) == FileAccess.Read

    Starting with .NET 4, a HasFlag instance method has been added to the Enum class which allows you to easily perform these tests:

        access.HasFlag(FileAccess.Read)

    This method follows one of the “themes” for the .NET Framework 4, which is to simplify and reduce the amount of boilerplate code like this you must write. Technorati Tags: .NET, C# 4

    Read the article

  • Extreme Optimization – Curves (Function Mapping) Part 1

    - by JoshReuben
    Overview

    · A curve is a functional map relationship between two factors (i.e. a function; however, the word "function" is a reserved word).
    · You can use the EO API to create common types of functions, find zeroes and calculate derivatives - it currently supports constants, lines, quadratic curves, polynomials and Chebyshev approximations.
    · A function basis is a set of functions that can be combined to form a particular class of functions.

    The Curve class

    · The abstract base class from which all other curve classes are derived - it provides the following methods:
    · ValueAt(Double) - evaluates the curve at a specific point.
    · SlopeAt(Double) - evaluates the derivative.
    · Integral(Double, Double) - evaluates the definite integral over a specified interval.
    · TangentAt(Double) - returns a Line curve that is the tangent to the curve at a specific point.
    · FindRoots() - attempts to find all the roots or zeroes of the curve.
    · A particular type of curve is defined by a Parameters property, of type ParameterCollection.

    The GeneralCurve class

    · Defines a curve whose value and, optionally, derivative and integrals are calculated using arbitrary methods. A general curve has no parameters.
    · Constructor params: RealFunction delegates - one for the function, and optionally another two for the derivative and integral.
    · If no derivative or integral function is supplied, they are calculated via the NumericalDifferentiation and AdaptiveIntegrator classes in the Extreme.Mathematics.Calculus namespace.

        // The function is 1/(1+x^2)
        private double f(double x)
        {
            return 1 / (1 + x*x);
        }

        // Its derivative is -2x/(1+x^2)^2
        private double df(double x)
        {
            double y = 1 + x*x;
            return -2*x / (y*y);
        }

        // The integral of f is Arctan(x), which is available from the Math class.
        var c1 = new GeneralCurve(new RealFunction(f), new RealFunction(df), new RealFunction(System.Math.Atan));

        // Find the tangent to this curve at x=1 (the Line class is derived from Curve)
        Line l1 = c1.TangentAt(1);

    Read the article

  • How to determine the modulus of a Float in Ada 95

    - by mat_geek
    I need to determine the amount left of a time cycle. To do that in C I would use fmod, but in Ada I can find no reference to a similar function. It needs to be accurate and it needs to return a float for precision. So how do I determine the modulus of a Float in Ada 95? What I want is something like:

        elapsed := time_taken mod 10.348;
        left    := 10.348 - elapsed;
        delay Duration(left);
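
    A minimal sketch of the standard workaround (my construction, not a library routine): Ada 95 defines the 'Floor attribute for floating-point types, which gives fmod-style semantics for positive arguments.

        -- FMod(X, Y) = X - Floor(X / Y) * Y, matching C's fmod for positive X and Y
        function FMod (X, Y : Float) return Float is
        begin
           return X - Float'Floor (X / Y) * Y;
        end FMod;

        -- usage:
        --   elapsed := FMod (Time_Taken, 10.348);
        --   delay Duration (10.348 - elapsed);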

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint, and it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet

    Read the article

  • FileOpenPicker/FileSavePicker doesn't allow *.* wildcard file associations

    - by mbrit
    On Twitter, Matthias Jauernig commented that the FileOpenPicker and FileSavePicker don't allow *.* wildcard file associations. I was relaxed about this and wrote back that it was related to sandboxing, implying it was a "good thing"; however, as Matthias commented back, perhaps it's not. In Metro-style, the sandboxing works such that if something gives you a file (e.g. the picker, or a share operation), you can access it regardless of where it is on the system. If you find the file yourself, you have to declare the type. The reason why I think it's related to sandboxing is that if you work with files programmatically you have to be explicit about the file types. This is to stop malware that you think is only interested in - say - .PDF files from scanning and uploading any .EML files that it can find on the machine. It follows that on the pickers that restriction would continue. It allows the retail store team to validate that an app is likely to behave itself. If it's an app that works with images, locking down the picker so that it can only access image file types makes sense. However, Matthias mentioned that he has an app that should allow files of any arbitrary type. That fits more into the "if the user selects it, it must be OK" camp than the "programmatic scanning" camp. So now I'm left wondering why the picker doesn't allow any type to be selected. I think then maybe the decision comes down to simplicity. A lot of the decisions in Metro-style design relate to ideas about "zero intimidation". Allowing the user to select any file is too much like Old Windows, and not enough like Reimagined Windows. What happens in Matthias's app if the user selects Explorer.exe as the file he or she wants to work with? I guess it's fine if you expect your user to know what they're doing (Old Windows), but not so fine if you're expecting a three-year-old to work with it (Reimagined Windows).

    Read the article

  • MSSQL DATEDIFF accuracy

    - by jomi
    Hello, I have to store some intervals in an MSSQL db. I'm aware that datetime's accuracy is approx. 3.3 ms (the milliseconds can only end in 0, 3 and 7). But when I calculate intervals between datetimes, I see that the result can only end in 0, 3 and 6 - so the more intervals I sum up, the more precision I lose. Is it possible to get an accurate DATEDIFF in milliseconds?

        declare @StartDate datetime
        declare @EndDate datetime
        set @StartDate = '2010-04-01 12:00:00.000'
        set @EndDate   = '2010-04-01 12:00:00.007'
        SELECT DATEDIFF(millisecond, @StartDate, @EndDate), @EndDate - @StartDate, @StartDate, @EndDate

    I would like to see 7 and not 6. (And it should be as fast as possible.) Thanks,
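
    A hedged suggestion rather than a tested answer: on SQL Server 2008 and later, the datetime2 type stores fractional seconds exactly (up to 100 ns), so the rounding to 0/3/7-millisecond ticks disappears, along with the drift when summing intervals.

        -- requires SQL Server 2008+ (datetime2 and inline DECLARE initializers)
        declare @StartDate datetime2 = '2010-04-01 12:00:00.000'
        declare @EndDate   datetime2 = '2010-04-01 12:00:00.007'
        SELECT DATEDIFF(millisecond, @StartDate, @EndDate)  -- 7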

    Read the article

  • Rails: creating a custom data type / creating a shorthand

    - by Shyam
    Hi, I am wondering how I could create a custom data type to use within a Rails migration file. Example: when you create a model, inside the migration file you can add columns. It could look like this:

        def self.up
          create_table :products do |t|
            t.column :name, :string
            t.timestamps
          end
        end

    I would like to know how to create something like this:

        t.column :name, :my_custom_data_type

    The reason is to create, for example, a "currency" type, which is nothing more than a decimal with a precision of 8 and a scale of 2. Since I only use MySQL, a solution specific to this database is sufficient. Thank you for your feedback and comments!
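
    One way this is commonly done (a sketch, not tested against any particular Rails version): reopen the table-definition class and add a helper method that expands to the decimal column you want.

        # e.g. in config/initializers/currency_column.rb -- a hypothetical location
        class ActiveRecord::ConnectionAdapters::TableDefinition
          def currency(name, options = {})
            column(name, :decimal, { :precision => 8, :scale => 2 }.merge(options))
          end
        end

    With that in place, the migration can say t.currency :price instead of spelling out the precision and scale each time.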

    Read the article

  • links for 2010-03-31

    - by Bob Rhubart
    Andy Mulholland: Rethinking the narrow and deep expertise model - "We increasingly realise that we have to read requirements in a more open way to decide what techniques can be used, what business experience can be added, etc, so the whole idea of encouraging ‘cross’ discipline understanding seems to look increasingly necessary as we look at how technology touches every part of business, and/or any other aspect of life. It is time to rethink the narrow and deep expertise model and consider T-shaped approaches where the depth is complemented by the width to understand how it might be used and how it fits with other capabilities and disciplines too." -- Andy Mulholland (tags: enterprisearchitecture)

    @vambenepe: Smoothing a discrete world - "For the short term (until we sell one) there are three cars in my household. A manual transmission, an automatic and a CVT (continuous variable transmission). This makes me uniquely qualified to write about Cloud Computing." -- William Vambenepe (tags: otn oracle cloud)

    @fteter: The Price of Progress - "I wonder about the price of progress on the business world. Do some of us get attached to old business models or software applications? Do we resist change for the better for emotional reasons? Are we sometimes impediments to progress just because we don't want things to change?" -- Oracle ACE Director Floyd Teter (tags: otn oracle oracleace progress innovation)

    Pat Shepherd: Enterprise Architecture should not be Arbitrary - "If done properly the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set. Those 3 layers and Governance (people and processes), IMHO, are layers that should not vary much as they have everything to do with understanding the business -- from which technological conclusions can later be drawn." - Pat Shepherd, responding to a post by Jordan Braunstein. (tags: oracle otn enterprisearchitecture soa)

    Read the article

  • Idea of an algorithm to detect a website's navigation structure?

    - by Uwe Keim
    Currently I am in the process of developing an importer of any existing, arbitrary (static) HTML website into the upcoming release of our CMS. While downloading the files is solved successfully, I'm pulling my hair out when it comes to detecting a site structure (pages and subpages) purely from the HTML files, without the user specifying additional hints. Basically I want to get a tree like:

        + Root page 1
          + Child page 1
          + Child page 2
            + Child child page 1
          + Child page 3
        + Root page 2
          + Child page 4
        + Root page 3
        + ...

    I.e. I want to be able to detect the menu structure from the links inside the pages. This does not have to be 100% accurate, but at least I want to achieve more than just a flat list. I thought of looking at multiple pages to find similar areas, identifying these as menu areas and parsing the links there, but after all I'm not that satisfied with this idea. My question: can you imagine any algorithm for detecting such a structure? Update 1: What I'm looking for is not a web spider, but an algorithm to create a logical tree of the relationships between the pages, so I can create pages and subpages inside my CMS when importing them. Update 2: As per Robert's suggestion, I'll solve this by starting at the root page and then simply parsing links as I go, treating every link inside a page as a child page. I'll probably recurse not in a depth-first but rather in a breadth-first manner, to get a more balanced navigation structure, as sketched below.
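
    A rough sketch of that breadth-first idea (get_links is a hypothetical helper that returns the internal links found on a page):

        from collections import deque

        def build_tree(root_url, get_links):
            tree = {root_url: []}              # parent -> list of children
            seen = {root_url}
            queue = deque([root_url])
            while queue:
                page = queue.popleft()
                for child in get_links(page):
                    if child not in seen:      # the first page to link to it becomes the parent
                        seen.add(child)
                        tree[page].append(child)
                        tree[child] = []
                        queue.append(child)
            return tree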

    Read the article

  • Significant figures in the decimal module

    - by Jason Baker
    So I've decided to try to solve my physics homework by writing some Python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example, this handles significant figures properly:

        >>> from decimal import Decimal
        >>> Decimal('1.0') + Decimal('2.0')
        Decimal("3.0")

    But this doesn't:

        >>> Decimal('1.00') / Decimal('3.00')
        Decimal("0.3333333333333333333333333333")

    So two questions: Am I right that this isn't the expected number of significant digits, or do I need to brush up on significant-digit math? Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do it with the decimal module, out of curiosity.
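
    For what it's worth, a small sketch of the manual-precision route the question wants to avoid, using the decimal module's context support: decimal carries the exponent exactly through addition but fills the full context precision on division, which is why the two examples behave differently.

        from decimal import Decimal, localcontext

        with localcontext() as ctx:
            ctx.prec = 3                              # three significant digits
            print(Decimal('1.00') / Decimal('3.00'))  # 0.333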

    Read the article

  • Is it better to hard code data or find an algorithm?

    - by OghmaOsiris
    I've been working on a board game that has a hex grid as the board (the upper right grid in the image below). Since the board will never change, and the spaces on the board will always be linked to the same other spaces around them, should I just hard-code every space with the values that I need? Or should I use various algorithms to calculate links and traversals? To be more specific, my board game is a 4-player game where each player has a 5x5x5x5x5x5 hex grid (again, the upper right grid in the image above). The object is to get from the bottom of the grid to the top, with various obstacles in the way, and each player is able to attack from the edge of their grid onto other players' grids, based on a range multiplier. Since a player's grid will never change, and the distance of any arbitrary space from the edge of the grid will always be the same, should I just hard-code this number into each of the spaces, or should I still use a breadth-first search algorithm when players are attacking? The only con I can think of for hard-coding everything is that I'm going to code 9 + 2(5+6+7+8) = 61 individual cells. Is there anything else that I'm missing that should make me consider using more complex algorithms?

    Read the article

  • Can you get access to the NumberFormatter used by ICU MessageFormat

    - by Travis
    This may be a niche question, but I'm working with ICU to format currency strings. I've bumped into a situation that I don't quite understand. When using the MessageFormat class, is it possible to get access to the NumberFormat object it uses to format currency strings? When you create a NumberFormat instance yourself, you can specify attributes like the precision and rounding used when creating currency strings. I have an issue where, for the South Korean locale ("ko_KR"), the MessageFormat class seems to create currency strings with rounding (100.50 → ₩100). In areas where I use NumberFormat directly, I set setMaximumFractionDigits and setMinimumFractionDigits to 2, but I can't seem to set this on the MessageFormat. Any ideas?
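
    A hedged sketch against the ICU4C API (worth verifying against the ICU version in use): MessageFormat exposes its sub-formats via getFormats(), so one option is to find the NumberFormat among them, clone and adjust it, and hand the clone back with adoptFormat().

        #include <unicode/msgfmt.h>
        #include <unicode/numfmt.h>

        void forceTwoFractionDigits(icu::MessageFormat& msg) {
            int32_t count = 0;
            const icu::Format** formats = msg.getFormats(count);
            for (int32_t i = 0; i < count; ++i) {
                const icu::NumberFormat* nf =
                    dynamic_cast<const icu::NumberFormat*>(formats[i]);
                if (nf != NULL) {
                    icu::NumberFormat* copy = (icu::NumberFormat*) nf->clone();
                    copy->setMinimumFractionDigits(2);
                    copy->setMaximumFractionDigits(2);
                    msg.adoptFormat(i, copy);  // msg takes ownership of the clone
                }
            }
        }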

    Read the article

  • In OpenGL vertex shader, gl_Position doesn't get homogenized..

    - by KJ
    Hi everyone, I was expecting gl_Position to automatically get homogenized (divided by w), but it seems that's not happening. Why do the following produce different results?

        // 1)
        void main() {
            vec4 p;
            // ... omitted ...
            gl_Position = projectionMatrix * p;
        }

        // 2)
        // ... same as above ...
        p = projectionMatrix * p;
        gl_Position = p / p.w;

    I think the two are supposed to generate the same results, but it seems that's not the case: 1 doesn't work, while 2 works as expected. Could it possibly be a precision problem? Am I missing something? This is driving me almost crazy - help needed. Many thanks in advance!

    Read the article

  • C++ long long manipulation

    - by Krakkos
    Given 2 32-bit ints, iMSB and iLSB:

        int iMSB = 12345678; // Most Significant Bits of file size in Bytes
        int iLSB = 87654321; // Least Significant Bits of file size in Bytes

    the long long form would be...

        // Always positive, so use 31 bits
        long long full_size = ((long long)iMSB << 31);
        full_size += (long long)(iLSB);

    Now, I don't need that much precision (the exact number of bytes), so how can I convert the file size to MiBytes to 3 decimal places and convert it to a string? I tried this...

        long double file_size_megs = file_size_bytes / (1024 * 1024);
        char strNumber[20];
        sprintf(strNumber, "%ld", file_size_megs);

    ...but it doesn't seem to work. i.e. 1234567899878 Bytes = 1177375.698 MiB??
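
    A hedged sketch of one way to do it (assuming the two ints really are the high and low 32 bits of the size): combine them as unsigned values shifted by 32, then format with a floating-point conversion specifier - "%ld" expects a long int, not a long double, which is one reason the snippet above misbehaves.

        #include <cstdio>

        int main() {
            unsigned int iMSB = 12345678u, iLSB = 87654321u;
            unsigned long long bytes =
                ((unsigned long long)iMSB << 32) | iLSB;   // recombine the halves
            double mib = bytes / (1024.0 * 1024.0);        // bytes -> MiB
            char strNumber[32];
            std::snprintf(strNumber, sizeof strNumber, "%.3f", mib);
            std::printf("%s MiB\n", strNumber);
            return 0;
        }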

    Read the article

  • Is there a literal notation for decimal in IronPython?

    - by jeroenh
    Suppose I have the following IronPython script:

        def Calculate(input):
            return input * 1.21

    When called from C# with a decimal, this function returns a double:

        var python = Python.CreateRuntime();
        dynamic engine = python.UseFile("mypythonscript.py");
        decimal input = 100M; // input is of type Decimal
        // next line throws RuntimeBinderException:
        // "cannot implicitly convert double to decimal"
        decimal result = engine.Calculate(input);

    I seem to have two options. First, I could cast on the C# side, which seems like a no-go as I might lose precision:

        decimal result = (decimal)engine.Calculate(input);

    The second option is to use System.Decimal in the Python script. This works, but pollutes the script, which should remain understandable for my users:

        import clr
        import System

        def CalculateVAT(amount):
            return amount * System.Decimal(1.21)

    Is there a way to tell the DLR that the number 1.21 should be interpreted as a Decimal, much like I would use the '1.21M' notation in C#?

    Read the article

  • Save matrix of double values in OpenCV

    - by Christian
    I have an OpenCV matrix of double (CV_32F) values. I'd like to save it to disk. I know I could convert it to a 1-channel 8-bit IplImage and save it, but that way I lose precision. Is there a way to save it directly in the 32-bit format, without having to convert it first? It would also be nice if the resulting file had an image format, so I can view the result as an image.
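
    One lossless option, sketched with OpenCV's FileStorage (YAML/XML serialization, not an image format, so it only covers the first half of the question):

        #include <opencv2/core/core.hpp>

        int main() {
            cv::Mat m = cv::Mat::eye(3, 3, CV_32F);          // stand-in for the real matrix
            cv::FileStorage fs("matrix.yml", cv::FileStorage::WRITE);
            fs << "m" << m;                                  // full 32-bit values preserved
            fs.release();
            return 0;
        }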

    Read the article

  • GWT's JSONParser producing incorrect values for numbers.

    - by WesleyJohnson
    I'm working with GWT and parsing a JSON result from an ASP.NET web service method that returns a DataTable. I can parse the result into a JSONValue/JSONObject just fine. The issue I'm having is that one of my columns is a DECIMAL(20, 0), and the values that get parsed into JSON aren't exact. To demonstrate without the need for a WS call, in GWT I threw this together:

        String jsonString = "{value:4768428229311981600}";
        JSONObject jsonObject = JSONParser.parse( jsonString ).isObject();
        Window.alert( jsonObject.toString() );

    This in turn alerts:

        {"value":4768428229311982000}

    I'm under the understanding that GWT's JSONParser is just using eval() to do the parsing, so is this some sort of number/precision issue with JavaScript that I've never been aware of? I'll admit I don't work with numbers that much in JavaScript. I might be able to work around this by changing the .NET web service to return this column as a string, but I'd really rather not do that. Thanks for any help.
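
    It is a precision issue: JavaScript numbers are IEEE 754 doubles, with 53 significand bits, so integers above 2^53 get rounded to the nearest representable value. A small plain-Java demonstration of the same rounding (my example, not GWT code):

        public class Precision {
            public static void main(String[] args) {
                long exact = 4768428229311981600L;
                double asDouble = exact;              // what an eval()-based parser yields
                System.out.println(exact);            // 4768428229311981600
                System.out.println((long) asDouble);  // 4768428229311981568 (nearest double)
            }
        }

    Returning the column as a string, as the post reluctantly suggests, is the usual fix.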

    Read the article

  • Connecting a Samsung Galaxy S3 in Ubuntu 13.04

    - by Squishy
    In 13.04, whenever I connect an Android device, one of three things happens:

    1. It mounts successfully (maybe once out of 3 attempts).
    2. It fails to mount with the following error message:
       Oops! Something went wrong. Unhandled error message: Unable to open MTP device
    3. This one occasionally happens:
       Unhandled error message: No such interface `org.gtk.vfs.Mount' on object at path /org/gtk/vfs/mount/1

    Regardless of activity (even when successfully mounted), it will continuously spam the following error message:

       Unable to mount SAMSUNG_Android
       Unable to open MTP Device '[usb:003,00x]'

    where x seems to be an arbitrary number below 10 and continues counting up with each new error message until the device is unplugged. I've also just noticed that even if it mounts successfully, it unmounts after about 30 seconds and starts spamming the error message above. The Android device is unlocked, always on and fully charged. ADB seems to function normally. Any suggestions? Further info: this happens on both a stock Samsung S3 and an Xperia Arc S running a custom AOSP-based ROM. I've also tried the steps outlined in this Stack Overflow answer, but the problem persists. UPDATE: After doing a dist-upgrade (May 8th 2013), the Xperia Arc S on the AOSP ROM now mounts and behaves normally. The S3, however, still behaves as described above. UPDATE: After careful observation, ADB does not, in fact, behave normally. If the error message above appears while sending an app to the device, the attempt is aborted with an error message saying that the device is unavailable.

    Read the article
