Search Results

Search found 59256 results on 2371 pages for 'data compression'.


  • Google I/O 2012 - Crunching Big Data with BigQuery

    Google I/O 2012 - Crunching Big Data with BigQuery (Jordan Tigani, Ryan Boyd). Google BigQuery is a data analysis tool born from Google's internal technologies. It enables developers to analyze terabyte data sets in seconds using a RESTful API. This session will dive into best practices for getting fast answers to business questions. We'll provide insight into how we process queries under the hood and how to construct SQL queries for complex analysis. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers | Views: 1 | 0 ratings | Time: 01:03:04 | More in Science & Technology

    Read the article

  • ODAC 11.2 Release 3 (11.2.0.2.1) released: 64-bit ODP.NET, TimesTen support, and more

    - by Yusuke.Yamamoto
    ODAC 11.2 Release 3 (11.2.0.2.1) is now available. This release adds 64-bit versions of Oracle Data Provider for .NET (ODP.NET) and Oracle Providers for ASP.NET, along with support for the Oracle TimesTen In-Memory Database. See the Oracle Data Access Components (ODAC) download and overview pages for details on using Oracle Database from .NET, including the ".NET + Oracle Database 11g" getting-started material.

    Read the article

  • Webcast: AutoInvoice Overview & Data Flow

    - by Annemarie Provisero-Oracle
    Webcast: AutoInvoice Overview & Data Flow. Date: June 4, 2014 at 11:00 am ET, 9:00 am MT, 4:00 pm GMT, 8:30 pm IST. This one-hour session is part one of a three-part series on AutoInvoice and is recommended for technical and functional users who would like a better understanding of what AutoInvoice does, the required setups, and how data flows through the process. We will also cover diagnostic scripts used with AutoInvoice. Topics will include: why use AutoInvoice, AutoInvoice setups, data flow, and diagnostic tools. Details & Registration: Doc ID 1671931.1

    Read the article

  • BigQuery: Simple example of a data collection and analysis pipeline + Your questions

    BigQuery: Simple example of a data collection and analysis pipeline + Your questions. Join Michael Manoochehri and Ryan Boyd live to talk about Google BigQuery. We'll give an overview of how we're using our cars, phones, App Engine and BigQuery to collect and analyze data. We'll be discussing our trusted tester feature which allows analyzing data from the App Engine datastore. We'll also review some of the more interesting questions from Stack Overflow and take questions via Google Moderator. From: GoogleDevelopers | Views: 250 | 16 ratings | Time: 26:53 | More in Science & Technology

    Read the article

  • Data Conversion in SQL Server

    Most of the time, you do not have to worry about implicit conversion in SQL expressions, or when assigning a value to a column. Just occasionally, though, you'll find that data gets truncated, queries run slowly, or comparisons just seem plain wrong. Robert Sheldon explains why you sometimes need to be very careful if you mix data types when manipulating values.

    Read the article

  • Master Data Management - The Trend Towards Multi-Domain and Other Realities

    - by Mala Narasimharajan
    In my quest to keep my fingers on the pulse of MDM, I recently found a pretty interesting article. The article was published in InformationWeek and provides some interesting statistics from a recent survey conducted by the analyst firm The Information Difference. Let's take a look: of the 130 organizations surveyed, 53% have live operational MDM implementations; 81% of those with live operational MDM implementations report broad success, a huge improvement over 2011's 54%; and 64% developed a business case prior to their MDM deployment, while a daring 32% went ahead without a business case. The article goes on to talk about the shift among vendors from focusing on customer data and product information management toward multi-domain master data management, as well as other realities around MDM. Take a look at the article. For more information on Oracle's master data management suite, click here.

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework

    To continue our series I wanted to look next at how to expose your data from the server side of your application. The interesting data in your business applications comes from a wide variety of data sources: a SQL Server database, Oracle DB, SQL Azure, SharePoint, a mainframe; and you have likely already chosen a data model such as NHibernate, Linq2Sql, Entity Framework, stored procedures, or a service. The goal of RIA Services in this release is to make it easy to...
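
    The shape the series works toward looks roughly like the sketch below (the NorthwindEntities and Customer names are placeholders for whatever the Entity Framework model generates, not the article's actual listing): a DomainService over the EF model whose query methods the Silverlight client can compose against.

        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        // Hypothetical EF context and entity names; the article's own model will differ.
        [EnableClientAccess]
        public class CustomerDomainService : LinqToEntitiesDomainService<NorthwindEntities>
        {
            // Each public method returning IQueryable becomes a query the Silverlight
            // client can filter, sort and page against through RIA Services.
            public IQueryable<Customer> GetCustomers()
            {
                return this.ObjectContext.Customers.OrderBy(c => c.CompanyName);
            }
        }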

    Read the article

  • Failure to download extra data files

    - by armanke13
    After a fresh install of 12.04, updating apt and restarting the system, I always get this annoying message after reboot: "Failure to download extra data files. The following packages requested additional data downloads after package installation, but the data could not be downloaded or could not be processed. ttf-mscorefonts-installer. The download will be attempted again later, or you can try the download again now. Running this command requires an active Internet connection." But if I try the download again now, a terminal window flashes and nothing seems to happen. It happens again every time I restart the system. I found someone who has this problem too, but there are no replies yet. I'm a newbie here, please help, thanks ^^

    Read the article

  • New eBook: In-Memory Data Grids for Dummies

    - by jeckels
    We've just released a new eBook, In-Memory Data Grids for Dummies. This is a fantastic resource if you're looking to explain in-memory data grids to colleagues, convince your boss of their value, or even discover some new use cases for your existing investment. In true "Dummies" style, this eBook will walk you through the basic tenets of in-memory data grids, their common use cases, where IMDGs sit in your architecture, and some key considerations when looking to implement them. While the title may say "Dummies," we know you'll find some useful overview and technical information in the resource. It's published by us on the Coherence team in partnership with Wiley (the "Dummies" company), but it's not only about Coherence or Oracle. In fact, we took pains to make this book fairly neutral to give you the best information, not a product pitch. Happy reading! Download the eBook now

    Read the article

  • Importing Data From Excel Using SSIS - Part 1

    Recently, while working on a project to import data from an Excel worksheet using SSIS, I realized that sometimes the SSIS package failed even though there were no changes in the structure/schema of the Excel worksheet. I investigated and noticed that the SSIS package succeeded for some sets of files but failed for others. I found that the structure/schema of the worksheets from both sets of Excel files was the same; the data was the only difference. How can just changing the data make an SSIS package fail? What actually causes this failure? What can we do to fix it?
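
    A common culprit in this situation (an assumption about the diagnosis, offered as a hint rather than a quote from the article) is the Excel driver's type guessing: it samples only the first rows of a column to pick a data type, so a column that starts out numeric and later contains text yields NULLs or errors for the later rows, i.e. failures that depend purely on the data. One mitigation is to force mixed columns to be read as text with IMEX=1, sketched below with a hypothetical file path; the same Extended Properties string can be set on an SSIS Excel connection manager.

        using System.Data.OleDb;

        class ExcelReadSketch
        {
            static void Main()
            {
                // IMEX=1 makes the ACE driver treat intermixed columns as text instead of
                // guessing a type from the first sampled rows. File path is hypothetical.
                const string connectionString =
                    "Provider=Microsoft.ACE.OLEDB.12.0;" +
                    @"Data Source=C:\data\import.xlsx;" +
                    "Extended Properties=\"Excel 12.0 Xml;HDR=YES;IMEX=1\"";

                using (var connection = new OleDbConnection(connectionString))
                {
                    connection.Open();
                    // ... read the worksheet, e.g. SELECT * FROM [Sheet1$]
                }
            }
        }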

    Read the article

  • I need to get past my permissions to recover data

    - by adsmz
    Due to some mishaps, I am unable to boot into Kubuntu at all. However, my data is still on the hard drive. I managed to get one of the other two computers to which I have access to read the disk by booting into a live CD session of Kubuntu. The only storage medium to which I have access is a 30 GB data stick. Here's where the trouble starts: in music alone, I have to back up about 60 GB. Obviously this is going to have to be split into chunks and moved over to the second spare PC until I can reinstall Kubuntu on my laptop. All of the data that needs backing up is behind a permissions wall, so while I can view it, I can't interact with it directly. I know copying and moving through the terminal can get around this with sudo cp or sudo mv, but is there a way to first compress multiple folders into a single archive, then move it? (While we're on the subject, what compression method would be best for large volumes of music in MP3, WAV, and OGG format?)

    Read the article

  • Compression for custom mime type works on IIS 7.5, but not IIS 7.0?

    - by Doogal
    I've managed to set up compression for a custom MIME type on IIS 7.5 with no problem: add the MIME type to IIS, then add it to the httpCompression element in applicationHost.config. But when I do the same thing on IIS 7, that particular MIME type is never compressed. This isn't a problem with compression in general, since other MIME types are compressed correctly. As far as I can tell, IIS 7 and IIS 7.5 are configured in exactly the same way. Does IIS 7 behave differently, and do I need to do something else to get it working? I've set up failed request tracing and get a NO_MATCHING_CONTENT_TYPE error during compression, but I can't figure out what else I need to do to tell IIS about my MIME type.

    Read the article

  • AJAX Return Problem from data sent via jQuery.ajax

    - by Anthony Garand
    I am trying to receive a JSON object back from PHP after sending data to the PHP file from the JS file. All I get is undefined. Here are the contents of the PHP and JS files.

    data.php:

        <?php
        $action = $_GET['user'];
        $data = array(
            "first_name" => "Anthony",
            "last_name" => "Garand",
            "email" => "[email protected]",
            "password" => "changeme");
        switch ($action) {
            case '[email protected]':
                echo $_GET['callback'] . '(' . json_encode($data) . ');';
                break;
        }
        ?>

    core.js:

        $(document).ready(function(){
            $.ajax({
                url: "data.php",
                data: {"user":"[email protected]"},
                context: document.body,
                data: "jsonp",
                success: function(data){renderData(data);}
            });
        });
        function renderData(data) {
            document.write(data.first_name);
        }

    Read the article

  • Entity Framework 4 POCO entities in separate assembly, Dynamic Data Website?

    - by steve.macdonald
    Basically I want to use a Dynamic Data website to maintain data in an EF4 model where the entities are in their own assembly. The model and context are in another assembly. I tried this http://stackoverflow.com/questions/2282916/entity-framework-4-self-tracking-entities-asp-net-dynamic-data-error but get an "ambiguous match" error from reflection:

        System.Reflection.AmbiguousMatchException was unhandled by user code
          Message=Ambiguous match found.
          Source=mscorlib
          StackTrace:
            at System.RuntimeType.GetPropertyImpl(String name, BindingFlags bindingAttr, Binder binder, Type returnType, Type[] types, ParameterModifier[] modifiers)
            at System.Type.GetProperty(String name)
            at System.Web.DynamicData.ModelProviders.EFTableProvider..ctor(EFDataModelProvider dataModel, EntitySet entitySet, EntityType entityType, Type entityClrType, Type parentEntityClrType, Type rootEntityClrType, String name)
            at System.Web.DynamicData.ModelProviders.EFDataModelProvider.CreateTableProvider(EntitySet entitySet, EntityType entityType)
            at System.Web.DynamicData.ModelProviders.EFDataModelProvider..ctor(Object contextInstance, Func`1 contextFactory)
            at System.Web.DynamicData.ModelProviders.SchemaCreator.CreateDataModel(Object contextInstance, Func`1 contextFactory)
            at System.Web.DynamicData.MetaModel.RegisterContext(Func`1 contextFactory, ContextConfiguration configuration)
            at WebApplication1.Global.RegisterRoutes(RouteCollection routes) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 42
            at WebApplication1.Global.Application_Start(Object sender, EventArgs e) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 78
          InnerException:

    Read the article

  • Properties in partial class not appearing in Data Sources window!

    - by Tim Murphy
    Entity Framework has created the required partial classes. I can add these partial classes to the Data Sources window and the properties display as expected. However, if I extend any of the classes in a separate source file, those properties do not appear in the Data Sources window, even after a build and refresh. Properties in partial classes spread across source files work as expected in the Data Sources window, except when the partial class has been created with EF. EDIT: After removing the offending table from the EDM designer and adding it back in, it all works as expected. Hardly a long-term solution. Has anyone else come across a similar problem?

    Read the article

  • Data validation: fail fast, fail early vs. complete validation

    - by Vivin Paliath
    Regarding data validation, I've heard that the options are to "fail fast, fail early" or "complete validation". The first approach fails on the very first validation error, whereas the second one builds up a list of failures and presents it. I'm wondering about this in the context of both server-side and client-side data validation. Which method is appropriate in what context, and why? My personal preference for data-validation on the client-side is the second method which informs the user of all failing constraints. I'm not informed enough to have an opinion about the server-side, although I would imagine it depends on the business logic involved.
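
    As a rough illustration of the two styles (a hypothetical sketch, not tied to any particular framework), the difference comes down to throwing on the first broken rule versus accumulating every broken rule and reporting them together:

        using System;
        using System.Collections.Generic;

        public class SignUpForm
        {
            public string Email { get; set; }
            public string Password { get; set; }
        }

        public static class Validator
        {
            // Fail fast, fail early: stop at the first broken rule.
            public static void ValidateFailFast(SignUpForm form)
            {
                if (string.IsNullOrEmpty(form.Email))
                    throw new ArgumentException("Email is required.");
                if (!form.Email.Contains("@"))
                    throw new ArgumentException("Email is not valid.");
                if (string.IsNullOrEmpty(form.Password) || form.Password.Length < 8)
                    throw new ArgumentException("Password must be at least 8 characters.");
            }

            // Complete validation: collect every broken rule and report them together.
            public static IList<string> ValidateAll(SignUpForm form)
            {
                var errors = new List<string>();
                if (string.IsNullOrEmpty(form.Email))
                    errors.Add("Email is required.");
                else if (!form.Email.Contains("@"))
                    errors.Add("Email is not valid.");
                if (string.IsNullOrEmpty(form.Password) || form.Password.Length < 8)
                    errors.Add("Password must be at least 8 characters.");
                return errors;
            }
        }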

    Read the article

  • How can I turn on DynamicCompression feature of IIS programmatically?

    - by LockeVN
    I'm making an installer program for my web application. My web application uses CSS and JS heavily, so I want to enable both static and dynamic HTTP compression for IIS 7/7.5. It needs two steps: I can modify web.config and put in the <httpCompression> tag; that part is fine. But DynamicContentCompression must be turned on in Windows Features to make httpCompression work. Static compression is enabled by default in IIS 7 and IIS 7.5, but dynamic compression is not enabled by default (although it's available). I can do it manually via Start/Control Panel/Programs and Features/Turn Windows features on or off/IIS/WWW Services/Performance Features/Dynamic Content Compression, but how can I turn that Windows feature on programmatically? I can use PowerShell or C# in my installer. Any idea how I might be able to do this? Thanks.
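
    One option for the installer (a sketch that assumes the installer already runs elevated; the feature name below is the standard DISM name for IIS Dynamic Content Compression) is to shell out to dism.exe:

        using System.Diagnostics;

        public static class IisFeatures
        {
            // Enable the Dynamic Content Compression Windows feature via dism.exe.
            public static int EnableDynamicCompression()
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = "dism.exe",
                    Arguments = "/online /enable-feature /featurename:IIS-HttpCompressionDynamic /norestart",
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var process = Process.Start(startInfo))
                {
                    process.WaitForExit();
                    return process.ExitCode;   // 0 = enabled, 3010 = enabled but a reboot is required
                }
            }
        }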

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However, from the client's point of view there is only one database; by this I mean that when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides very nice functionality for client-specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?

    Read the article

  • Compressing three individual jpeg pics containing temporal redundancy?

    - by michael
    I am interfacing an embedded device with a camera module that returns a single JPEG-compressed frame each time I trigger it. I would like to take three successive shots (approx. 1 frame per 1/4 second) and further compress the images into a single file. The assumption here is that there is a lot of temporal redundancy, and therefore lots of room for more compression across the three frames (compared to sending three separate JPEG images). I will be implementing the solution on an embedded device in C, without any libraries and no OS. The camera will be taking pics in an area with very little movement (no visitors or screens in the background, maybe a tree with swaying branches), so I think my assumption about redundancy is pretty solid. When the file is finally viewed on a PC/Mac, I don't mind having to write something to extract the three frames (so it can be a nonstandard kludge). So I guess the actual question is: what is the best way to compress these three images together, given the fact that they are already in JPEG format? (It is possible to convert back to a raw image, but I'd rather not have to...)

    Read the article

  • Best method to compress a JSON string in terms of performance and compression ratio

    - by Eric Yin
    For a JSON string that contains all kinds of settings, numbers, strings, etc., the total size typically falls into the 10 KB~50 KB range. I want to compress it before saving it to the database. So I wonder which compression method I should choose. I am using C# 4; I know I can choose gzip and deflate, but the compression ratio is not good (although the speed is good). More specifically, compression can be a little slow (since it happens only once), but the result should be small. Decompression should be lightning fast, since it happens a lot. Please give some advice.
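
    For reference, a minimal sketch of the built-in route mentioned above (GZipStream from System.IO.Compression, available in .NET 4); a noticeably better ratio than gzip/deflate would need a third-party library:

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        public static class JsonCompressor
        {
            // Compress the JSON once before saving it (e.g. into a varbinary column).
            public static byte[] Compress(string json)
            {
                byte[] raw = Encoding.UTF8.GetBytes(json);
                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                    {
                        gzip.Write(raw, 0, raw.Length);
                    }
                    return output.ToArray();
                }
            }

            // Decompress back to the JSON string (the frequent, fast path).
            public static string Decompress(byte[] compressed)
            {
                using (var input = new MemoryStream(compressed))
                using (var gzip = new GZipStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    gzip.CopyTo(output);
                    return Encoding.UTF8.GetString(output.ToArray());
                }
            }
        }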

    Read the article

  • Algorithms to find longest common prefix in a sliding window.

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor. I am seeking to improve the time it takes to search the dictionary for a phrase. I have considered Knuth-Morris-Pratt and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL or splay trees) improve compression time considerably. What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually quite sure of the significance of each node in the binary search tree. I am searching for phrases, so will each character be considered a node? Also, how and what is inserted/removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure how it's used.
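
    For context, the search such a tree is meant to speed up is the naive scan below: try every position in the window and keep the longest match against the lookahead (a sketch of the brute-force baseline, not of the AVL/splay approach being asked about):

        public static class LzWindowSearch
        {
            // Brute force: for each candidate position in the window, count how many
            // bytes match the lookahead starting at 'current', and keep the longest.
            // This is the O(window size * match length) work a dictionary tree replaces.
            public static void FindLongestMatch(
                byte[] data, int windowStart, int current, int maxMatch,
                out int bestOffset, out int bestLength)
            {
                bestOffset = 0;
                bestLength = 0;
                for (int candidate = windowStart; candidate < current; candidate++)
                {
                    int length = 0;
                    while (length < maxMatch &&
                           current + length < data.Length &&
                           data[candidate + length] == data[current + length])
                    {
                        length++;
                    }
                    if (length > bestLength)
                    {
                        bestLength = length;
                        bestOffset = current - candidate;   // distance back into the window
                    }
                }
            }
        }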

    Read the article

  • How to unit test internals (organization) of a data structure?

    - by Herms
    I've started working on a little ruby project that will have sample implementations of a number of different data structures and algorithms. Right now it's just for me to refresh on stuff I haven't done for a while, but I'm hoping to have it set up kind of like Ruby Koans, with a bunch of unit tests written for the data structures but the implementations empty (with full implementations in another branch). It could then be used as a nice learning tool or code kata. However, I'm having trouble coming up with a good way to write the tests. I can't just test the public behavior as that won't necessarily tell me about the implementation, and that's kind of important here. For example, the public interfaces of a normal BST and a Red-Black tree would be the same, but the RB Tree has very specific data organization requirements. How would I test that?
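
    One common approach is to test the structural invariants rather than a specific shape, through a checker the test suite can reach (sketched here in C# rather than Ruby; the Node and IsRed names are hypothetical). For a red-black tree that means asserting that no red node has a red child and that every root-to-leaf path passes through the same number of black nodes:

        // Hypothetical minimal node shape; the real tree lives in the production code
        // and exposes its root to the tests (e.g. via an internal property).
        public class Node
        {
            public int Value;
            public bool IsRed;
            public Node Left, Right;
        }

        public static class RedBlackInvariants
        {
            // Returns the black-height of the subtree, or -1 if a red-black rule is broken.
            public static int CheckedBlackHeight(Node node)
            {
                if (node == null) return 0;                        // nil leaves count as black
                if (node.IsRed &&
                    ((node.Left != null && node.Left.IsRed) ||
                     (node.Right != null && node.Right.IsRed)))
                    return -1;                                     // red node with a red child
                int left = CheckedBlackHeight(node.Left);
                int right = CheckedBlackHeight(node.Right);
                if (left < 0 || right < 0 || left != right) return -1;
                return left + (node.IsRed ? 0 : 1);
            }
        }

    A test can then insert a known sequence of keys and assert that CheckedBlackHeight of the root is non-negative (and that the root is black), which pins down the organization without hard-coding one exact tree shape.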

    Read the article

  • R: How can I reorder the rows of a matrix, data.frame or vector according to another one.

    - by John
        test1 <- as.matrix(c(1, 2, 3, 4, 5))
        row.names(test1) <- c("a", "b", "c", "d", "e")
        test2 <- as.matrix(c(6, 7, 8, 9, 10))
        row.names(test2) <- c("e", "d", "c", "b", "a")

        test1
          [,1]
        a    1
        b    2
        c    3
        d    4
        e    5

        test2
          [,1]
        e    6
        d    7
        c    8
        b    9
        a   10

    How can I reorder test2 so that the rows are in the same order as test1? For example:

        test2
          [,1]
        a   10
        b    9
        c    8
        d    7
        e    6

    I tried to use the reorder function with reorder(test1, test2), but I could not figure out the correct syntax. I see that reorder takes a vector, and here I'm using a matrix. My real data has one character vector and the other is a data.frame. I figured that the data structure would not matter too much for this example; I just need help with the syntax and can adapt it to my real problem.

    Read the article

  • Compressing as GZip WCF requests (SOAP and REST)

    - by Joannes Vermorel
    I have a .NET 3.5 web app hosted on Windows Azure that exposes several WCF endpoints (both SOAP and REST). The endpoints typically receive 100x more data than they serve (a lot of data is uploaded, much less is downloaded). Hence, I would like to take advantage of HTTP GZip compression, not from the server viewpoint but rather from the client viewpoint, by sending compressed requests (returning compressed responses would be fine, but won't bring much gain anyway). Here is the little C# snippet used on the client side to set up WCF:

        var binding = new BasicHttpBinding();
        var address = new EndpointAddress(endPoint);
        _factory = new ChannelFactory<IMyApi>(binding, address);
        _channel = _factory.CreateChannel();

    Any idea how to adjust the behavior so that compressed HTTP requests can be made?
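
    For the REST endpoints, one workable sketch (assuming the server, or a module in front of it, has been configured to accept gzip request bodies, which is not the case by default) is to gzip the body by hand and mark it with Content-Encoding; the SOAP/BasicHttpBinding path would instead need a custom gzip message encoder, which is not shown here:

        using System.IO;
        using System.IO.Compression;
        using System.Net;
        using System.Text;

        public static class CompressedRestClient
        {
            // POST a gzip-compressed body to a REST endpoint.
            public static HttpWebResponse PostCompressed(string url, string body)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "application/xml";
                request.Headers["Content-Encoding"] = "gzip";

                using (var requestStream = request.GetRequestStream())
                using (var gzip = new GZipStream(requestStream, CompressionMode.Compress))
                {
                    byte[] raw = Encoding.UTF8.GetBytes(body);
                    gzip.Write(raw, 0, raw.Length);
                }
                return (HttpWebResponse)request.GetResponse();
            }
        }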

    Read the article

  • How does git save space and stay fast at the same time?

    - by eSKay
    I just saw the first git tutorial at http://blip.tv/play/Aeu2CAI. How does git store all the versions of all the files and still be more economical in space than Subversion, which saves only the latest version of the code? I know this can be done using compression, but that would be at the cost of speed. Yet this also says that git is much faster (though where it gains the most is the fact that most of its operations are offline). So my guess is that although git compresses data extensively, it is still faster because uncompression + work is still faster than network_fetch + work. Am I correct? Even close?

    Read the article
