Search Results

Search found 61393 results on 2456 pages for 'data integration'.


  • At what point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at what point do they become useful? Surely for a tiny project where you'd have only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they really test "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails, so each developer runs all the tests to prevent regressions. Obviously, because all the developers always run all the tests, we catch errors due to conflicting changes as soon as one developer pulls another's changes (if any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do that too with Mercurial, by checking out old revisions.) Basically I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article

  • Summary of usage policies for website integration of various social media networks?

    - by Dallas
    To cut to the chase... I look at Twitter's usage policy and see limitations on what can and can't be done with their logo. I also see examples of websites that use icons that have been integrated into the look and feel of their own site. Given Twitter's policy, for example, it would appear that legal conversations/agreements would need to take place to do this, especially on a commercial site. I believe it is perfectly acceptable to have a plain-text button that simply has the word "Tweet" on it and the same functionality. My question is whether anyone can provide online (or other) references that attempt to summarize what can and can't be done when integrating various social networks into your own work. The answer I will mark as correct will be the one that provides the best resource(s) giving the best summaries of what can and can't be done with specific logos/icons, with a secondary factor being that a variety of social networking sites are addressed in your answer. Before people point to specific questions, I am looking for a well-rounded approach that considers a breadth of networks and considerations. Background: I would like to incorporate social media icons and functionality, but would like to consider what type of modifications can be done without needing to involve lawyers. For example, can I bring in a standard Facebook logo, but incorporate my site's color into the logo? Would the answer differ if I maintained their color, but added a few pixels of another color to transition? I am not saying I want to do this, but rather am using it as an example.

    Read the article

  • Help with Perl persistent data storage using Data::Dumper

    - by stephenmm
    I have been trying to figure this out for way too long tonight. I have googled it to death, and none of the examples, nor my hacks of the examples, are getting it done. It seems like this should be pretty easy, but I just cannot get it. Here is the code:

        #!/usr/bin/perl -w
        use strict;
        use Data::Dumper;

        my $complex_variable = {};
        my $MEMORY = "$ENV{HOME}/data/memory-file";

        $complex_variable->{ 'key' }  = 'value';
        $complex_variable->{ 'key1' } = 'value1';
        $complex_variable->{ 'key2' } = 'value2';
        $complex_variable->{ 'key3' } = 'value3';
        print Dumper($complex_variable)."TEST001\n";

        open M, ">$MEMORY" or die;
        print M Data::Dumper->Dump([$complex_variable], ['$complex_variable']);
        close M;

        $complex_variable = {};
        print Dumper($complex_variable)."TEST002\n";

        # Then later to restore the value, it's simply:
        do $MEMORY;
        #eval $MEMORY;
        print Dumper($complex_variable)."TEST003\n";

    And here is my output:

        $VAR1 = {
                  'key2' => 'value2',
                  'key1' => 'value1',
                  'key3' => 'value3',
                  'key' => 'value'
                };
        TEST001
        $VAR1 = {};
        TEST002
        $VAR1 = {};
        TEST003

    Everything that I read says that the TEST003 output should look identical to the TEST001 output, which is exactly what I am trying to achieve. What am I missing here? Should I be "do"ing differently, or should I be "eval"ing instead, and if so, how? Thanks for any help...

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data. I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique to the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans WHERE TransactionID = 1075048

        LCTID     TransactionID
        ========= =============
        4358      1075048
        4359      1075048

        2 row(s) affected

    But really what I want is to begin to hunt down all the related data, to try to see which is right. So this hypothetical software would start by showing me these two rows. Next, I want to see the transaction that is linked into this table. Now, that transaction points to an MAL, so show me that. Now let's add those two LCTs that the transaction is "on"; a transaction can be on only one LCT, yet this one is pointing to two. Okay computer, both of those LCTs point to an MAL and the transaction that created them, so show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT, so show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359?... And so on, and so on. Now, I did all this manually, running single selects, copying and pasting uniqueidentifier keys and converting them into friendly ID numbers so I could easily see the relationships. Is there software that can do this?
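    Short of a ready-made visualizer, the relationship walk described above can be scripted against JDBC's metadata API, which exposes exactly the foreign keys needed to hop from a row to the rows that reference it. Below is a minimal sketch, assuming SQL Server's JDBC driver and the dbo schema; the connection details and class name are placeholders, not from the question:

        import java.sql.*;

        // Sketch: enumerate every foreign key that references a table, then
        // pull the child rows that point at a given key value. Recursing on
        // each child row's own keys gives the breadth-first walk described.
        public class FkWalker {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=MyDb", "user", "pass")) {
                    DatabaseMetaData meta = conn.getMetaData();
                    // Every FK in the database that references LCTTrans
                    try (ResultSet fks = meta.getExportedKeys(null, "dbo", "LCTTrans")) {
                        while (fks.next()) {
                            String childTable  = fks.getString("FKTABLE_NAME");
                            String childColumn = fks.getString("FKCOLUMN_NAME");
                            // Fetch the child rows pointing at the suspect transaction
                            String sql = "SELECT * FROM " + childTable
                                       + " WHERE " + childColumn + " = ?";
                            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                                ps.setInt(1, 1075048);
                                try (ResultSet rows = ps.executeQuery()) {
                                    while (rows.next()) {
                                        System.out.println(childTable + ": "
                                                + rows.getObject(1));
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

    DatabaseMetaData.getImportedKeys() gives the opposite direction (parents of a row), so the two calls together cover the whole exploration.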

    Read the article

  • How to properly set relationships in Core Data when using setValue and data already exists

    - by ern
    Let's say I have two objects: Articles and Categories. For the sake of this example, all relevant categories have already been added to the data store. When looping through data that holds edits for articles, there is category relationship information that needs to be saved. I was planning on using the -setValue method in the Article class in order to set the relationships, like so:

        - (void)setValue:(id)value forUndefinedKey:(NSString *)key {
            if ([key isEqualToString:@"categories"]) {
                NSLog(@"trying to set categories...");
            }
        }

    The problem is that value isn't a Category; it is just a string (or array of strings) holding the title of a category. I could certainly do a lookup within this method for each category and assign it, but that seems inefficient when processing a whole bunch of articles at once. Another option is to populate an array of all possible categories and just filter, but my question is where to store that array. Should it be a class method on Article? Is there a way to pass in additional data to the -setValue method? Is there another, better option for setting the relationship I'm not thinking of? Thanks for your help.

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign-key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slow, so long as queries run quick and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I'd like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road. Here is the current state of the application:

    Web Server/Data Processing: Pipeline Pilot on Windows Server + IIS
    Database: Oracle 10g, ~1TB of data, ~180 tables, with several billion-plus-row tables
    Network Storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • Simple ADF page using BAM Data Control

    - by [email protected]
    Purpose: In this blog I will walk you through very simple steps to create an ADF page using a BAM data control connection.

    Details:

    Create the project
    Open JDeveloper (make sure you have installed the SOA extension for JDev). Create a new Application using the "Generic Application" template and click "Next". Shuttle "ADF Faces" to the right pane for the project technology, then click "Finish".

    Create a BAM connection
    In the resource palette click on "Folder -> New Connection -> BAM". Enter the connection name and click "Next". Enter the connection details, click on "Test connection", then "Finish".

    Create the BAM Data Control
    Open the IDE connection created in the step above. Drag and drop "Employees" onto the "Data controls" palette. Select "Flat Query" and click "Finish".

    Create the View
    Create a new JSF page. From the Data Controls panel drag and drop "Employees -> Query -> ADF Read-Only Table". Right-click and run the page.

    Read the article

  • Data Aggregation of CSV files in Java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which produce a key, and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also, I would like to know whether hashing is really a good solution, because we would have to use a 64-bit hashing solution to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Algorithms that we have thought of so far: hashing (using a ConcurrentHashMap), merge-sorting the files, or a DB (using MySQL, Hadoop, or Redis). The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - a total of 14 columns. We would like the solution to be the fastest possible in terms of data processing.
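    Note that if the composite key is kept as a plain string, a hash map needs no separate 64-bit hashing scheme at all - collisions are handled by the map itself. A minimal single-threaded sketch in Java, assuming every file shares the header layout shown above and that values fit in a long:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.HashMap;
        import java.util.Map;

        // Sketch of the hash-map aggregation: the composite key is the first
        // KEY_COLS fields joined verbatim, so no custom hashing is needed.
        public class CsvAggregator {
            static final int KEY_COLS = 3; // f1,f2,f3 in the example above

            public static void main(String[] args) throws IOException {
                Map<String, long[]> totals = new HashMap<>();
                String header = null;
                for (String file : args) {
                    try (BufferedReader in = Files.newBufferedReader(Path.of(file))) {
                        header = in.readLine(); // remember/skip the header row
                        String line;
                        while ((line = in.readLine()) != null) {
                            String[] parts = line.split(",");
                            int nValues = parts.length - KEY_COLS;
                            String key = String.join(",",
                                    java.util.Arrays.copyOf(parts, KEY_COLS));
                            long[] sums = totals.computeIfAbsent(key,
                                    k -> new long[nValues]);
                            for (int i = 0; i < nValues; i++) {
                                sums[i] += Long.parseLong(parts[KEY_COLS + i]);
                            }
                        }
                    }
                }
                System.out.println(header);
                totals.forEach((k, v) -> {
                    StringBuilder row = new StringBuilder(k);
                    for (long s : v) row.append(',').append(s);
                    System.out.println(row);
                });
            }
        }

    For inputs that do exceed memory, an external merge sort on the key columns (or a Hadoop job with the key as the map output key) is the usual fallback.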

    Read the article

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up with a very interesting read for everybody. It is published by Microsoft, and in my opinion it is indeed a fantastic read. It took me some time to read this entire book, but it was worth reading, as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book:

    Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge:
    • Which NoSQL database(s) best meet(s) the needs of the organization?
    • How does an organization integrate a NoSQL database into its solutions?

    As I keep on reading the book, I find it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out my blog tomorrow, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic "three or more same-color bubbles that touch pop" and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph. I think the answer is that all bubbles that were just disconnected from the graph should fall. However, the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "moving bubble is closest to slot ij, neighbors of slot ij are bubbles a,b,c, moving bubble is sufficiently close to bubble b, hence moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However, it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
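    For the falling-bubbles question specifically, even the crude grid answer is a single flood fill from the ceiling row: anything unreachable is disconnected and falls, so the full graph machinery may indeed be overkill. A sketch in Java (square-grid neighbors for brevity; a real bubble shooter would use the six neighbors of a hex/offset grid):

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Sketch: "which bubbles should fall?" as a flood fill from the top
        // row of a plain grid. Any occupied slot the fill never reaches is
        // disconnected from the ceiling and should fall.
        public class FloatingBubbles {
            public static boolean[][] anchored(Object[][] grid) {
                int rows = grid.length, cols = grid[0].length;
                boolean[][] seen = new boolean[rows][cols];
                Deque<int[]> stack = new ArrayDeque<>();
                for (int c = 0; c < cols; c++) {      // seeds: bubbles on the ceiling
                    if (grid[0][c] != null) {
                        seen[0][c] = true;
                        stack.push(new int[]{0, c});
                    }
                }
                int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    for (int[] d : dirs) {
                        int r = p[0] + d[0], c = p[1] + d[1];
                        if (r >= 0 && r < rows && c >= 0 && c < cols
                                && grid[r][c] != null && !seen[r][c]) {
                            seen[r][c] = true;
                            stack.push(new int[]{r, c});
                        }
                    }
                }
                return seen; // occupied slots with seen[r][c] == false should fall
            }
        }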

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows users to search/read some content in a particular domain. We want to add a capability to the app which can suggest to the user some content based on their usage pattern (analyzing data based on frequency and relevance). Currently, every time a user searches or reads something, we store that information in the backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, and give an example? And what is this concept called: data analysis? data mining? business intelligence? or something else? Update: Sorry for being too broad; here is an example SQL database (just to give an idea - the actual DB is a little different, with normalization and such):

        Table: UserArticles
        Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory

        Table: CategoryArticles
        Fields: Category | ArticleTitle | Author | etc.

    One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table). Task: Use the information available in the UserArticles table and rank categories in the order in which they would be presented to the user automatically in another part of the application. Factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools. Either way, the task is as mentioned above. I am not too sure which route to take, hence the question. Thoughts??
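    This kind of ranking is usually filed under recommender systems (a corner of data mining), but the frequency-plus-recency version needs nothing specialized. A sketch of one common approach - an exponentially decayed visit count per category - where the 30-day half-life and the record shape are illustrative assumptions, not from the question:

        import java.time.Duration;
        import java.time.Instant;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch: rank categories by summing a per-visit score that decays
        // with age, so frequency and recency both contribute.
        public class CategoryRanker {
            public record Visit(String category, Instant visitedAt) {}

            static final double HALF_LIFE_DAYS = 30.0; // arbitrary choice

            public static List<String> rank(List<Visit> visits, Instant now) {
                Map<String, Double> score = new HashMap<>();
                for (Visit v : visits) {
                    double ageDays =
                            Duration.between(v.visitedAt(), now).toHours() / 24.0;
                    // Each visit is worth 1.0 when fresh, 0.5 after one
                    // half-life, 0.25 after two, and so on.
                    double weight = Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
                    score.merge(v.category(), weight, Double::sum);
                }
                return score.entrySet().stream()
                        .sorted(Map.Entry.<String, Double>comparingByValue(
                                Comparator.reverseOrder()))
                        .map(Map.Entry::getKey)
                        .toList();
            }
        }

    The same score can be computed directly in SQL with a GROUP BY over the UserArticles rows, if you'd rather keep it in the database.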

    Read the article

  • Can JSON be made easily and safely editable by the non-technical Excel crowd?

    - by glitch
    I'm looking for a data storage format that's very intuitive and easy to edit. It should ideally be targeted towards the same crowd as Excel. At the same time, I would like the data structure to be a tree. Ideally this would be JSON, since it offers both the tree aspect and allows for more interesting constructs like arrays. That, and parsing libraries for JSON are ubiquitous, so I don't have to reinvent the wheel. The problem is that, at least with a non-specialized text editor, JSON is a giant pain to edit for a non-technical user. I'm thinking along the lines of someone who might have used Excel in the past, but never a real text editor. Someone who might not be comfortable with the idea of preserving JSON syntax by hand. Are there data formats out there that would fit this profile? I'd very much prefer this to be JSON, actually, but then it would require a solid editing tool that hides the underlying implementation from the user. Think Excel and how it abstracts CSV syntax from the user. The reason I'm looking for something like this is that the team has been working with pretty hierarchical data for a while now, and we've hit the limits of how easy it is to represent it in simple CSVs without having to create complex rules for how to represent hierarchy semantics in each row. Any suggestions?

    Read the article

  • Data Synchronization in mobile apps - multiple devices, multiple users

    - by ProgrammerNewbie
    I'm looking into building my first mobile app. One of the core features of the application is that multiple devices/users will have access to the same data - and all of them will have CRUD rights. I believe the architecture should involve a central server where all the data is stored. The devices will use an API to interact with the server to perform data operations (e.g. adding a record, editing a record, deleting a record). I imagine a scenario where synchronizing the data will become a problem. Assume the application should work when it is not connected to the Internet, and thus cannot communicate with the central server. So:

    User A is offline and edits record #100
    User B is offline and edits record #100
    User C is offline and deletes record #100
    User C goes online (presumably, record #100 should get deleted on the server)
    Users A and B go online, but the records they edited no longer exist

    All sorts of scenarios similar to the above can come up. How is this generally handled? I plan to use MySQL, but am wondering if it's not appropriate for such a problem.
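    The storage engine is rarely the hard part here - MySQL is fine - the hard part is the sync protocol. One common pattern is per-record version counters plus tombstones, so the server can tell a stale edit from a fresh one and hand conflicts back to the client to resolve. A minimal sketch in Java, with illustrative names and a deliberately simple "first push wins" policy:

        import java.util.HashMap;
        import java.util.Map;

        // Sketch of server-side sync bookkeeping: each record carries a
        // version, and deletes leave a tombstone so a late edit is seen as
        // a conflict instead of silently recreating the record.
        public class SyncServer {
            enum Outcome { APPLIED, CONFLICT, DELETED_ON_SERVER }

            static class Record {
                long version;        // bumped on every accepted change
                boolean tombstone;   // true once deleted; row kept, not erased
                String payload;
            }

            private final Map<Long, Record> store = new HashMap<>();

            public synchronized void create(long id, String payload) {
                Record r = new Record();
                r.payload = payload;
                store.put(id, r);
            }

            /** A client pushes an edit along with the version it last synced. */
            public synchronized Outcome pushEdit(long id, long baseVersion, String payload) {
                Record r = store.get(id);
                if (r == null || r.tombstone) return Outcome.DELETED_ON_SERVER;
                if (r.version != baseVersion) return Outcome.CONFLICT; // someone else won
                r.version++;
                r.payload = payload;
                return Outcome.APPLIED;
            }

            public synchronized Outcome pushDelete(long id, long baseVersion) {
                Record r = store.get(id);
                if (r == null || r.tombstone) return Outcome.DELETED_ON_SERVER;
                if (r.version != baseVersion) return Outcome.CONFLICT;
                r.version++;
                r.tombstone = true;
                return Outcome.APPLIED;
            }
        }

    In the scenario above, user C's delete is applied first; users A and B then get DELETED_ON_SERVER back and their apps can ask whether to discard the edit or resurrect the record.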

    Read the article

  • Window out of the screen when maximized using WPF shell integration library

    - by Eduardo Molteni
    I'm using the WPF Shell Integration Library to create a custom chrome for my WPF app. All is good, but when maximizing the app, 6 or 7 pixels are out of the screen. This is the code I'm using:

        <Style TargetType="{x:Type local:MainWindow}">
          <Setter Property="shell:WindowChrome.WindowChrome">
            <Setter.Value>
              <shell:WindowChrome ResizeBorderThickness="6" CaptionHeight="10" CornerRadius="0" GlassFrameThickness="1"/>
            </Setter.Value>
          </Setter>
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type local:MainWindow}">
                <Grid>
                  <Border BorderThickness="1" BorderBrush="#389FD1" Background="#389FD1">
                    <ContentPresenter Margin="0,22,0,0" Content="{TemplateBinding Content}"/>
                  </Border>
                  <StackPanel Orientation="Horizontal" HorizontalAlignment="Right" VerticalAlignment="Top">
                    <TextBlock Text="{Binding NombreUsuario}" Foreground="White" Margin="5,5,20,5" Opacity=".8" />
                    <Button Style="{StaticResource ImageButton}" Height="20" Width="20" Margin="0" Click="WindowMinimize"
                            shell:WindowChrome.IsHitTestVisibleInChrome="True">
                      <Image Height="10" Width="10" Source="/Resources/Images/minimize.png" />
                    </Button>
                    <Button Style="{StaticResource ImageButton}" Height="20" Width="20" Margin="0" Click="WindowMaximizeRestore"
                            shell:WindowChrome.IsHitTestVisibleInChrome="True">
                      <Image Height="10" Width="10" Source="/Resources/Images/maximize.png" />
                    </Button>
                    <Button Style="{StaticResource ImageButton}" Height="20" Width="20" Margin="0" Click="WindowClose"
                            shell:WindowChrome.IsHitTestVisibleInChrome="True">
                      <Image Height="10" Width="10" Source="/Resources/Images/close.png" />
                    </Button>
                  </StackPanel>
                </Grid>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>

    Read the article

  • How to ignore a test within the JUnit test method itself

    - by Benju
    We have a number of integration tests that fail when our staging server goes down for weekly maintenance. When the staging server is down, we send a specific response that I can detect in my integration tests. When I get this response, instead of failing the test, I'm wondering whether it is possible to skip/ignore that test even though it has already started running. This would keep our test reports a bit cleaner. Does anybody have suggestions?
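    JUnit 4 supports exactly this through org.junit.Assume: when an assumption fails inside a running test, the test is reported as ignored rather than failed. A sketch - the staging check and test names are placeholders for however the maintenance response is actually detected:

        import static org.junit.Assume.assumeTrue;

        import org.junit.Test;

        public class StagingIntegrationTest {

            @Test
            public void ordersEndpointReturnsData() {
                // A failed assumption marks the test as skipped, not failed.
                assumeTrue(stagingIsUp());

                // ... the real integration assertions go here ...
            }

            // Placeholder: detect the maintenance response however your
            // HTTP client surfaces it.
            private boolean stagingIsUp() {
                return true;
            }
        }

    Putting the assumeTrue call in an @Before method applies the same skip to every test in the class.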

    Read the article

  • Importing Excel data into SSIS 2008 using Data Conversion Transformation

    Despite its benefits, SQL Server Integration Services Import Export Wizard has a number of limitations, resulting in part from a new set of rules that eliminate implicit data type conversion mechanisms present in Data Transformation Services. This article discusses a method that addresses such limitations, focusing in particular on importing the content of Excel spreadsheets into SQL Server.

    Read the article

  • Data source does not support server-side data paging using ASP.NET C#

    - by Aamir Hasan
    Yesterday someone mailed me and asked about "the data source does not support server-side data paging". So I am writing up the solution here; if you have hit this problem, read this article and look at the example code - it will help you a lot. The only change you have to make is in DataBind(). Here you had used a SqlDataReader to read the data retrieved from the database, but SqlDataReader is forward-only: you cannot traverse back and forth over it. The solution is to use a DataAdapter and DataSet instead. Your function may change somewhat, like this:

        private void DataBind()
        {
            // for grid view
            SqlCommand cmdO;
            string SQL = "select * from TABLE";
            conn.Open();
            cmdO = new SqlCommand(SQL, conn);
            SqlDataAdapter da = new SqlDataAdapter(cmdO);
            DataSet ds = new DataSet();
            da.Fill(ds);
            GridView1.Visible = true;
            GridView1.DataSource = ds;
            GridView1.DataBind();
            ds.Dispose();
            da.Dispose();
            conn.Close();
        }

    This surely works. The rest of your code is fine. Enjoy coding.

    Read the article

  • Recommend .NET data access layer/middle tier

    - by Simon G
    Hi, I'm currently creating an MVC application that is likely to expand to include Silverlight, WPF, and possibly Windows Phone clients, all using the same data. So I've created a class library to keep all my objects in, and I've created the MVC app. My question is: what would be the best way to access the data, taking into account possible expansion in the future? Should I use web services/WCF? RIA Services? Remoting? Or something else? What have people used in the past, and what do you recommend? Thanks

    Read the article

  • Is Master Data Management CRM's Secret Sauce?

    - by divya.malik
    This was the title of a recent blog entry by our colleagues in EMEA. Having a good master data management system enables organizations to get a unified, accurate and complete understanding of their customers. Gartner Group's John Radcliffe explains why MDM is destined to be at the heart of future CRM and social CRM projects. Experts are predicting big things for master data management (MDM) in the immediate future. While far from being a new kid on the block, its potential benefits at a time when organisations are drowning in data mean that it is in the right place at the right time. "MDM is not 'nice to have'," explains John Radcliffe, research vice president at Gartner. "If tackled in the right way it can provide near term business value that plays into an organisation's new focus on cost efficiencies, risk management and regulatory compliance, while supporting growth and future transformative strategies." The complete article can be found here.

    Read the article

  • T-SQL Tuesday #006: Tiger/Line Spatial Data

    - by Mike C
    This month’s T-SQL Tuesday post is about LOB data: http://sqlblog.com/blogs/michael_coles/archive/2010/05/03/t-sql-tuesday-006-what-about-blob.aspx. For this one I decided to post a sample Tiger/Line SQL database I use all the time in live demos. For those who aren't familiar with it, Tiger/Line is a dataset published by the U.S. Census Bureau. Tiger/Line has a lot of nice geospatial data down to a very detailed level. It actually goes from the U.S. state level all the way down to...(read more)

    Read the article

  • How to dynamically add controls in asp.net Dynamic Data

    - by loviji
    Hello, I'm trying to work with ASP.NET Dynamic Data. I find Dynamic Data is not as well covered as other technologies. Now, to my question. Let us work with the Details.aspx page located in ~\DynamicData\PageTemplates. I need to add an <asp:DynamicControl runat="server" ... to the page, into Form1.detailsTable. I've tried this:

        protected DynamicControl myC = new DynamicControl();

        protected void Page_Load(object sender, EventArgs e)
        {
            foreach (var c in table.Columns)
            {
                myC.DataField = c.DisplayName;
                FormView1.Controls.Add(myC);
            }
        }

    but I cannot see the desired result. Where is the problem? Thanks

    Read the article

  • Import and Export data from SQL Server 2005 to Excel Sheet

    - by SAMIR BHOGAYTA
    For uploading data from an Excel sheet to SQL Server and vice versa, we need to create a linked server in SQL Server.

    Example linked-server creation (before executing the command below, the Excel sheet should already exist at the specified path and should contain the column names):

        EXEC sp_addlinkedserver 'ExcelSource2',
            'Jet 4.0',
            'Microsoft.Jet.OLEDB.4.0',
            'C:\Srinivas\Vdirectory\Testing\Marks.xls',
            NULL,
            'Excel 5.0'

    Once you execute the query above, it will create the linked server in SQL Server 2005. The following query sends the data from the Excel sheet to SQL Server 2005:

        INSERT INTO emp
        SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
            'Excel 8.0;Database=C:\text.xls', 'SELECT * FROM [sheet1$]')

    The following query sends the data from SQL Server 2005 to the Excel sheet:

        INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
            'Excel 8.0;Database=c:\text.xls;', 'SELECT * FROM [sheet1$]')
        SELECT * FROM emp

    Read the article

  • POX return data from WCF Data Services

    - by keithwarren7
    I am using WCF Data Services (netfx4) to provide data sourced from SQL via EF. The standard OData mechanism is fine and JSON works as well, but I need a third option for generic POX (plain old XML). I have yet to come across a simple strategy or switch that allows me to control this, but I am sure one must exist, or a workaround method might be available. Any ideas? Ideally I would like to be able to use something like the JSONP option, wherein I append 'format=JSON' to the URL - in this case 'format=pox' or 'POX=true' or something of that nature.

    Read the article

  • Data structure: sort and search effectively

    - by Jiten Shah
    I need a data structure with, say, 4 keys, and I should be able to sort on any of these keys. What data structure can I opt for? Sorting time should be very little. I thought of a tree, but it will only help with searching and sorting on one key; for the other keys I'd have to rebuild the tree on that particular key and then search. Is there any data structure that can take care of all 4 keys at the same time? The 4 fields total 12 bytes, and the total size of each record is 40 bytes. We have memory constraints too. The operations are: insertion, deletion, and sorting on different keys.
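    One standard answer is to keep one balanced-tree index per key over shared record references: each extra index then costs roughly one reference per record rather than a 40-byte copy, and every key stays permanently sorted. A sketch in Java, with illustrative field names and an id field used only as a tie-breaker:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;
        import java.util.TreeSet;

        // Sketch: one sorted index per key over shared record references.
        // Insert/delete cost O(log n) per index; "sorting" on any key is
        // just an in-order walk of that key's index.
        public class MultiKeyIndex {
            public record Rec(long id, int k1, int k2, int k3, int k4) {}

            private final List<TreeSet<Rec>> indexes = new ArrayList<>();

            public MultiKeyIndex() {
                List<Comparator<Rec>> keys = List.of(
                        Comparator.comparingInt(Rec::k1),
                        Comparator.comparingInt(Rec::k2),
                        Comparator.comparingInt(Rec::k3),
                        Comparator.comparingInt(Rec::k4));
                for (Comparator<Rec> key : keys) {
                    // Tie-break on id so records with equal key values coexist.
                    indexes.add(new TreeSet<>(key.thenComparingLong(Rec::id)));
                }
            }

            public void insert(Rec r) { for (TreeSet<Rec> ix : indexes) ix.add(r); }
            public void delete(Rec r) { for (TreeSet<Rec> ix : indexes) ix.remove(r); }

            // Records in ascending order of key 0..3 - no re-sort needed.
            public Iterable<Rec> sortedBy(int key) { return indexes.get(key); }
        }

    With tight memory, the same idea works with sorted arrays of record indices (4 x 4 bytes per record for int indices) at the cost of O(n) inserts and deletes.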

    Read the article
