Search Results

Search found 63386 results on 2536 pages for 'data structure'.


  • How would you implement an API key in a WCF Data Service?

    - by rushonerok
    Is there a way to require an API key in the URL, or some other way of passing the service a private key, in order to grant access to the data? I have this right now:

    ```csharp
    using System;
    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel.Web;
    using Numina.Framework;
    using System.Web;
    using System.Configuration;

    [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class odata : DataService
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            //config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            HttpRequest Request = HttpContext.Current.Request;
            if (Request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
                throw new DataServiceException("ApiKey needed");
            base.OnStartProcessingRequest(args);
        }
    }
    ```

    This works, but it's not perfect, because you cannot get at the metadata and discover the service through the Add Service Reference explorer. I could check whether $metadata is in the URL, but it seems like a hack. Is there a better way?
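    A hedged sketch of the $metadata exception mentioned above: let metadata requests through so Add Service Reference still works, and require the key for everything else. Whether this is less of a hack is debatable; args.RequestUri is the request address that ProcessRequestArgs exposes:

    ```csharp
    protected override void OnStartProcessingRequest(ProcessRequestArgs args)
    {
        // Allow $metadata requests so the service stays discoverable.
        bool isMetadata = args.RequestUri.AbsolutePath
            .EndsWith("$metadata", StringComparison.OrdinalIgnoreCase);

        HttpRequest request = HttpContext.Current.Request;
        if (!isMetadata && request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
            throw new DataServiceException(403, "ApiKey needed");

        base.OnStartProcessingRequest(args);
    }
    ```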

  • How to test a class that makes an HTTP request and parses the response data in Obj-C?

    - by GuidoMB
    I have a class that needs to make an HTTP request to a server in order to get some information. For example:

    ```objc
    - (NSUInteger)newsCount {
        NSHTTPURLResponse *response;
        NSError *error;
        NSURLRequest *request = ISKBuildRequestWithURL(ISKDesktopURL, ISKGet, cookie, nil, nil);
        NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        if (!data) {
            NSLog(@"The user's (%@) news count could not be obtained: %@", username, [error description]);
            return 0;
        }
        NSString *regExp = @"Usted tiene ([0-9]*) noticias? no leídas?";
        NSString *stringData = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
        NSArray *match = [stringData captureComponentsMatchedByRegex:regExp];
        [stringData release];
        if ([match count] < 2)
            return 0;
        return [[match objectAtIndex:1] intValue];
    }
    ```

    The thing is that I'm unit testing (using OCUnit) the whole framework, but I need to simulate/fake what the NSURLConnection is responding in order to test different scenarios, because I can't rely on the server to test my framework. So the question is: what is the best way to do this?

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer.

    I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return TRUE. If I set this_tx <- last_tx, I'll get a TRUE.

    What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors, with the same levels and the same numeric coding for the levels; both are just subsets of the same original data frame. Converting them to character vectors doesn't help.

    Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses. The best plan I've come up with is to check the treatments chronologically: if the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.

  • Best fit curve for trend line

    - by Dave Jarvis
    Problem constraints:

    - The size of the data set, but not the data itself, is known.
    - The data set grows by one data point at a time.
    - The trend line is graphed one data point at a time (using a spline/Bezier curve).

    Graphs. The collage below shows data sets with reasonably accurate trend lines. The graphs are:

    - Upper-left: by hour, with ~24 data points.
    - Upper-right: by day for one year, with ~365 data points.
    - Lower-left: by week for one year, with ~52 data points.
    - Lower-right: by month for one year, with ~12 data points.

    User inputs. The user can select: the type of time series (hourly, daily, monthly, quarterly, annual); and the start and end dates for the time series. For example, the user could select a daily report for 30 days in June.

    Trend weight. To calculate the window size (i.e., the number of data points to average when calculating the trend line), the following expression is used: data points / trend weight, where data points is derived from the user inputs and trend weight is 6.4. Even though a trend weight of 6.4 produces good fits, it is rather arbitrary, and might not be appropriate for different user inputs.

    Question: How should trend weight be calculated given the constraints of this problem?

  • Checking the data of all elements with the same class

    - by Tiffani
    I need the code to check the data-name value of all instances of .account-select. Right now it just checks the first .account-select element and not any subsequent ones. The current behavior: on click of an element such as "John Smith", it checks the data-name of the .account-select list items. If the data-names are the same, it does not create a new list item with the John Smith data. If no data-names are equal to "John Smith", then it adds a list item with John Smith.

    This is the JSFiddle I made for it, so you can see what I am referring to: http://jsfiddle.net/rsxavior/vDCNy/22/ Any help would be greatly appreciated.

    This is the jQuery code I am using right now:

    ```javascript
    $('.account').click(function () {
        var acc = $(this).data("name");
        // Note: calling .data() on a multi-element selection only reads the
        // FIRST matched element, which is why later items are never checked.
        var sel = $('.account-select').data("name");
        if (acc === sel) {
        } else {
            $('.account-hidden-li').append('<li class="account-select" data-name="' + $(this).data("name") + '">' + $(this).data("name") + '<a class="close bcn-close" data-dismiss="alert" href="#">&times;</a></li>');
        }
    });
    ```

    And the HTML:

    ```html
    <ul>
        <li><a class="account" data-name="All" href="#">All</a></li>
        <li><a class="account" data-name="John Smith" href="#">John Smith</a></li>
    </ul>
    <ul class="account-hidden-li">
    </ul>
    ```

  • How to organize live data integrity tests and code unit tests?

    - by karlthorwald
    I have several files of tests for my code (which use a "unittest" class). Later I found it would be nice to also test database integrity, so I put that into a separate directory tree. (Things like: keys have the correct format, parent and child nodes point to each other correctly, and such.) I use the same unittest class for the integrity tests.

    Now I wonder if it really makes sense to keep these separate. To test the integrity of the data, I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, sending an alarm if something happens in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, but that would result in the code tests also being executed by the cron on the production environment.

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4 TB of data, with the replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

    1. Kill the Cassandra process on one server of the cluster.
    2. Start it again, wait for the commit log to be written to disk, and kill it again.
    3. Make the modifications in the storage.xml file.
    4. Rename or delete the files in the data directories according to the changes we made.
    5. Start Cassandra.
    6. Go to step 1 with the next server on the list.

    My questions would be:

    - Did I understand the process correctly? Is there any risk of data corruption?
    - During the process, there will be servers with different versions of the storage.xml file in the same cluster, for the same keyspace. Is that a problem?
    - Same question as above if we not only add, rename, and remove ColumnFamilies, but also change the CompareWith parameter, or transform an existing column family into a super one. Or do we need to change the name?

    Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

  • SQL query recursion for a web-like structure

    - by MickeyD
    I have a table here, named "Foo". The data is set up something like this:

    ```
    ID  TableReference  DataId0  DataId1  DataId2
    --  --------------  -------  -------  -------
    1   Prize           3        4        5
    2   Prize           4        5        NULL
    3   Cash            1        NULL     NULL
    4   Prize           8        NULL     12
    5   Foo             2        3        NULL
    6   Cash            8        1        10
    7   Foo             5        1        2
    ```

    The data is horribly set up, I know, but I didn't set it up that way. :) I'm only dealing with the aftereffects. I'm trying to come up with a way to essentially "flatten" the table; that is, to display all the data to the point where the table "Foo" does not reference itself. I'm trying to figure out a SQL query that will get there.

    Usually when I deal with recursion, I have (or can establish) parent IDs and set it up that way, but for this table there are seemingly multiple child and parent IDs, creating a web-like structure instead of a hierarchy. So I'm at a loss where to even begin to write a SQL query for something like this.

    Note: there is no infinite looping (where one Foo points to another Foo, which points back to the original Foo) from what I've found. I'm using T-SQL. Thanks for any assistance, if at all possible.

  • Populate a tree from hierarchical data using 1 LINQ statement

    - by Midhat
    Hi. I have set up this programming exercise:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace ConsoleApplication2
    {
        class DataObject
        {
            public int ID { get; set; }
            public int ParentID { get; set; }
            public string Data { get; set; }

            public DataObject(int id, int pid, string data)
            {
                this.ID = id;
                this.ParentID = pid;
                this.Data = data;
            }
        }

        class TreeNode
        {
            public DataObject Data { get; set; }
            public List<DataObject> Children { get; set; }
        }

        class Program
        {
            static void Main(string[] args)
            {
                List<DataObject> data = new List<DataObject>();
                data.Add(new DataObject(1, 0, "Item 1"));
                data.Add(new DataObject(2, 0, "Item 2"));
                data.Add(new DataObject(21, 2, "Item 2.1"));
                data.Add(new DataObject(22, 2, "Item 2.2"));
                data.Add(new DataObject(221, 22, "Item 2.2.1"));
                data.Add(new DataObject(3, 0, "Item 3"));
            }
        }
    }
    ```

    The desired output is a list of 3 TreeNodes, holding items 1, 2, and 3. Item 2 will have a list of 2 DataObjects as its Children member, and so on. I have been trying to populate this tree (or rather a forest) using just 1 SLOC with LINQ. A simple group-by gives me the desired data, but the challenge is to organize it in TreeNode objects. Can someone give a hint, or an impossibility result, for this?
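    For what it's worth, a hedged sketch of one way to get there, assuming the TreeNode is adjusted so Children holds TreeNodes rather than DataObjects (as written, a List<DataObject> cannot nest beyond one level). ToLookup groups the flat list by ParentID once, and a single recursive Select statement then wires up the forest:

    ```csharp
    // Assumes: class TreeNode { public DataObject Data { get; set; }
    //                           public List<TreeNode> Children { get; set; } }
    static List<TreeNode> BuildForest(List<DataObject> data)
    {
        // Group the flat list by ParentID once; byParent[k] yields the children of node k.
        ILookup<int, DataObject> byParent = data.ToLookup(d => d.ParentID);

        // The single LINQ statement, applied recursively: each node selects its own children.
        List<TreeNode> Build(int parentId) =>
            byParent[parentId]
                .Select(d => new TreeNode { Data = d, Children = Build(d.ID) })
                .ToList();

        // The sample data marks roots with ParentID 0.
        return Build(0);
    }
    ```

    With the sample data, BuildForest(data) returns the three roots, and the node for Item 2 carries Item 2.1 and Item 2.2, the latter carrying Item 2.2.1.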

  • Object Design: How to Organize/Structure a "Collection Class"

    - by CrimsonX
    I'm currently struggling to understand how I should organize/structure a class which I have already created. The class does the following:

    - As its input in the constructor, it takes a collection of logs.
    - In the constructor it validates and filters the logs through a series of algorithms implementing my business logic.
    - After all filtering and validation is complete, it exposes a collection (a List) of the valid and filtered logs, which can be presented to the user graphically in a UI.

    Here is some simplified code describing what I'm doing:

    ```csharp
    class FilteredCollection
    {
        public FilteredCollection(SpecialArray<MyLog> myLog)
        {
            // validate inputs
            // filter and validate logs in collection
            // in the end, FilteredLogs is ready for access
        }

        public List<MyLog> FilteredLogs { get; private set; }
    }
    ```

    However, in order to access this collection, I have to do the following:

    ```csharp
    var filteredCollection = new FilteredCollection(specialArrayInput);
    // Example of accessing data
    filteredCollection.FilteredLogs[5].MyLogData;
    ```

    Other key pieces of input:

    - I foresee only one of these filtered collections existing in the application (should I therefore make it a static class? Or perhaps a singleton?).
    - Testability and flexibility in the creation of the object are important (perhaps I should therefore keep this an instanced class for testability?).
    - I'd prefer to simplify the dereferencing of the logs if at all possible, as the actual variable names are quite long and it takes some 60-80 characters just to get to the actual data.

    My attempt at keeping this class simple is that the only purpose of the class is to create this collection of validated data. I know there may be no "perfect" solution here, but I'm really trying to improve my skills with this design, and I would greatly appreciate advice to do that. Thanks in advance.
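    A hedged sketch of one way to reconcile those points: keep the class instance-based (so tests can construct it freely) and shorten the call sites with an indexer and a Count property that delegate to the private list, so callers never spell out FilteredLogs. The Validate/Filter helpers are hypothetical stand-ins for the business logic described above:

    ```csharp
    class FilteredCollection
    {
        private readonly List<MyLog> filteredLogs;

        public FilteredCollection(SpecialArray<MyLog> logs)
        {
            // Hypothetical helpers standing in for the validation/filtering algorithms.
            filteredLogs = Filter(Validate(logs));
        }

        // Callers index the collection directly: filteredCollection[5].MyLogData
        public MyLog this[int index] => filteredLogs[index];
        public int Count => filteredLogs.Count;
    }
    ```

    If the application truly does need a single shared instance, that decision can live in a composition root (create it once and pass it around) rather than baking a singleton into the class, which tends to preserve testability.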

  • Is it Possible to Use Constraints on Hierarchical Data in a Self-Referential Table?

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data:

    ```
    +--------+-------------+
    | Field  | Type        |
    +--------+-------------+
    | id     | int(10)     |
    | parent | int(10)     |
    | name   | varchar(45) |
    +--------+-------------+
    ```

    The table is self-referential in that parent refers to id. So you might have the following data:

    ```
    +----+--------+---------------+
    | id | parent | name          |
    +----+--------+---------------+
    |  1 |      0 | fruit         |
    |  2 |      0 | vegetable     |
    |  3 |      1 | apple         |
    |  4 |      1 | orange        |
    |  5 |      3 | red delicious |
    |  6 |      3 | granny smith  |
    |  7 |      3 | gala          |
    +----+--------+---------------+
    ```

    Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to cascade on update, and to prevent deletion of a record if it has any "children." So I used the following:

    ```sql
    CREATE TABLE `test`.`fruit` (
      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `parent` INT(10) UNSIGNED,
      `name` VARCHAR(45) NOT NULL,
      PRIMARY KEY (`id`),
      CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
        REFERENCES `fruit` (`id`)
        ON UPDATE CASCADE
        ON DELETE RESTRICT
    ) ENGINE = InnoDB;
    ```

    From what I understand, this should fit my requirements. (And parent must default to NULL to allow insertions, correct?) The problem is that if I change the id of a record, it will not cascade:

    ```
    Cannot delete or update a parent row: a foreign key constraint fails
    (`test`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
    REFERENCES `fruit` (`id`) ON UPDATE CASCADE)
    ```

    What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.

  • Import a CSV into a class structure as the user defines

    - by Assimilater
    I have a contact manager program, and I would like to offer the feature to import CSV files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders:

    ```
    "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name"
    "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence"
    ```

    Notice that in one data field, "Name", there is a comma to separate last name and first name, and in another there is not. My plan is to have a line for each field (i.e., ID, Name, City, etc.) and an "import to" statement with a list box of options like: Don't Import, Business, Join Date, First Name, Zip, and have the program recognize those as properties of an object (a sketch follows below). I'd also like the user to be able to record preset field orders so they can reuse them for CSV files from the same download source.

    Then I also need it to check whether a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is one (e.g., the mailing address may have changed, so if that field is filled, overwrite it; or this mailing address is invalid, so leave the current data untouched for this person and overwrite the rest).

    While I'm capable of coding this, I'm not very excited about it, and I'm wondering if there's a tool or set of tools out there that already performs most of this functionality... I hope this makes sense.
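    A hedged sketch of the "import to" mapping described above, assuming a hypothetical Contact type: the user's choices become a dictionary from CSV header to property setter, the dictionary itself is the reusable preset, and headers left out of the map mean "Don't Import":

    ```csharp
    using System;
    using System.Collections.Generic;

    class Contact // hypothetical stand-in for the app's contact type
    {
        public string Name { get; set; }
        public string Zip { get; set; }
        public DateTime? JoinDate { get; set; }
    }

    class CsvImportPreset
    {
        // CSV header -> property setter; omitting a header means "Don't Import".
        // In the real UI this map would be built from the user's list-box choices.
        public Dictionary<string, Action<Contact, string>> Map { get; } =
            new Dictionary<string, Action<Contact, string>>
            {
                ["Name"]      = (c, v) => c.Name = v,
                ["Zip"]       = (c, v) => c.Zip = v,
                ["Join Date"] = (c, v) => c.JoinDate = DateTime.Parse(v),
            };

        // Apply one parsed CSV row using the header row to find each column.
        public Contact Apply(string[] headers, string[] row)
        {
            var contact = new Contact();
            for (int i = 0; i < headers.Length && i < row.Length; i++)
                if (Map.TryGetValue(headers[i], out var set))
                    set(contact, row[i]);
            return contact;
        }
    }
    ```

    Serializing the preset's header-to-target pairs would give the saved field orders per download source; the duplicate-record policy could be a similar per-field choice applied before overwriting.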

  • Where can I find my iPhone app's Core Data persistent store?

    - by Dr Dork
    I'm diving into iPhone development, so I apologize in advance if this is a ridiculous question, but in a new iPad app project using the Core Data framework, here's the generated code for creating the persistentStoreCoordinator:

    ```objc
    - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
        if (persistentStoreCoordinator != nil) {
            return persistentStoreCoordinator;
        }
        NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"ApplicationName.sqlite"]];
        NSError *error = nil;
        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) {
            /*
             Replace this implementation with code to handle the error appropriately.

             abort() causes the application to generate a crash log and terminate. You should
             not use this function in a shipping application, although it may be useful during
             development. If it is not possible to recover from the error, display an alert
             panel that instructs the user to quit the application by pressing the Home button.

             Typical reasons for an error here include:
             * The persistent store is not accessible
             * The schema for the persistent store is incompatible with the current managed object model
             Check the error message to determine what the actual problem was.
             */
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        return persistentStoreCoordinator;
    }
    ```

    My questions are:

    1. The first time I run the app, is the ApplicationName.sqlite database created automatically if it doesn't exist? If not, when is it created? When data is added to it programmatically?
    2. Once the DB does exist, where can I locate the file? I'd like to open it with a different program so I can manually manipulate the data.

    Thanks so much in advance for your help! I'm going to continue researching these questions right now.

  • Is there something equivalent to an 'address of' or offset operator in .NET?

    - by Gio
    We have nested structures such as the following, used as an interface for some device drivers. On occasion we have to update individual elements. An 'address of' operator would be helpful, but an 'offset' function or operator is what I'm really looking for; I'm just not sure how to go about it. In other words, how far is structureN.elementX away from the start of the structure, in bytes?

    ```csharp
    [StructLayout(LayoutKind.Sequential)]
    public struct S1
    {
        UInt16 elem1;
        UInt16 elem2;
        UInt16 elem3;
    }

    [StructLayout(LayoutKind.Sequential)]
    public struct S2
    {
        UInt16 elem1;
        UInt16 elem2;
        UInt16 elem3;
    }

    [StructLayout(LayoutKind.Sequential)]
    public struct driver
    {
        public S1 s1;
        public S2 s2;
    }
    ```

    For instance, we need to send the device driver some data to update driver.s1.elem3 by providing an offset address, a data block, and a length. We would update our local copy, then call the device API with the aforementioned data. I'm not sure whether I have to do this with 'unsafe' method calls. Any help?
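    A hedged sketch using Marshal.OffsetOf, which reports a field's byte offset within a sequential-layout struct without any 'unsafe' code; the offset of a nested element is just the sum of the two offsets. Field names are passed as strings and must match the declarations exactly:

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    static class DriverOffsets
    {
        // Byte offset of driver.s2.elem3 from the start of the driver struct.
        public static int S2Elem3Offset()
        {
            int s2InDriver = (int)Marshal.OffsetOf(typeof(driver), "s2");
            int elem3InS2  = (int)Marshal.OffsetOf(typeof(S2), "elem3");
            return s2InDriver + elem3InS2;
        }
    }
    ```

    With three UInt16 fields per struct, this should yield 6 + 4 = 10 bytes, and Marshal.SizeOf can supply the length argument for the device API call in the same spirit.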

  • SQL Query to separate data into two fields

    - by Phillip
    I have data in one column that I want to separate into two columns. The data is separated by a comma if present. This field can have no data, only one set of data, or two sets of data separated by the comma. Currently I pull the data and save it as a comma-delimited file, then use FoxPro to load the data into a table, process the data as needed, and re-insert the data back into a different SQL table for my use. I would like to drop the FoxPro portion and have the SQL query separate the data for me. Below is a sample of what the data looks like:

    ```
    Store  Amount  Discount
    1      5.95
    1      5.95    PO^-479^2
    1      5.95    PO^-479^2
    2      5.95
    2      5.95    PO^-479^2
    2      5.95    +CA8A09^-240^4,CORDRC^-239^7
    3      5.95
    3      5.95    +CA8A09^-240^4,CORDRC^-239^7
    3      5.95    +CA8A09^-240^4,CORDRC^-239^7
    ```

    In the data above, I want to sum the data in the Amount field to get a gross amount. Then I want to pull out the specific discount amount, which is located between the caret characters, and sum it to get the total discount amount. Then I add the two together to get the total net amount. The query I want to write will separate the Discount field as needed (see store 2, line 3, for two discounts being applied), then pull out the value between the caret characters.

  • Can't return data to parent activity

    - by user23
    I'm trying to return data (the position of the item picked from the grid) to the parent activity, but my code fails. The debugger shows that 'data' correctly gets the key and value from data.putExtra("POS_ICON", position) in the child activity, but afterwards, in onActivityResult() in the parent activity, the debugger shows 'data' with no key and no returned data... it's as if data loses its content. I've followed other posts and tutorials, but no way. Please help.

    Parent activity:

    ```java
    public void selIcono(View v) {
        Intent intent = new Intent(this, SelIconoActivity.class);
        startActivityForResult(intent, PICK_ICON_REQUEST);
    }

    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        // Here's the problem: no data is returned!!
        if (requestCode == PICK_ICON_REQUEST) {
            if (resultCode == RESULT_OK) {
                // An icon was picked.
                putIcon(data.getIntExtra("POS_ICON", -1));
            }
        }
    }
    ```

    Child activity:

    ```java
    public class SelIconoActivity extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_sel_icono);
            GridView gridview = (GridView) findViewById(R.id.gr_iconos);
            gridview.setAdapter(new ImageAdapter(this));
            gridview.setOnItemClickListener(new OnItemClickListener() {
                public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                    Intent data = new Intent();
                    data.putExtra("POS_ICON", position);
                    setResult(Activity.RESULT_OK, data);
                    finish();
                }
            });
        }
    }
    ```

  • How to recover deleted files?

    - by vijay.shad
    Hi. My laptop has two OSes: one is Windows Vista, and the other is Ubuntu. I am currently on the Ubuntu system; this is my primary OS. There are 4 partitions on my hard disk: Windows OS, Linux (Ubuntu OS), and Data.

    Now the problem part. The Data partition is NTFS. I have mounted this partition at the location /media/windrive-a under Ubuntu. A little while back I decided to remove the mount of the Data partition, and I fired the command rm -r /media/windrive-a/. To my shock, all my data on the Data drive is gone. Now, I know this is not the command to remove a mounted partition. But I have committed the wrong. Is there any way I can get my data back? This is very important data for me. Please suggest.

  • What's the right way to do mutable data structures (e.g., skip lists, splay trees) in F#?

    - by dan
    What's a good way to implement mutable data structures in F#? The reason I'm asking is that I want to go back and implement the data structures I learned about in the algorithms class I took this semester (skip lists, splay trees, fusion trees, y-fast tries, van Emde Boas trees, etc.), which was a pure theory course with no coding whatsoever, and I figure I might as well try to learn F# while I'm doing it.

    I know that I "should" use finger trees to get splay-tree functionality in a functional language, that I should do something with laziness to get skip-list functionality, etc., but I want to get the basics nailed down before I try playing with purely functional implementations. There are lots of examples of how to do functional data structures in F#, but there isn't much on how to do mutable data structures, so I started by fixing up the doubly linked list here into something that allows inserts and deletes anywhere. My plan is to turn this into a skip list, and then use a similar structure (a discriminated union of a record) for the tree structures I want to implement.

    Before I start on something more substantial: is there a better way to do mutable structures like this in F#? Should I just use records and not bother with the discriminated union? Should I use a class instead? Is this question "not even wrong"? Should I be doing the mutable structures in C#, and not dip into F# until I want to compare them to their purely functional counterparts? And, if a DU of records is what I want, could I have written the code below better or more idiomatically? It seems like there's a lot of redundancy here, but I'm not sure how to get rid of it.

    ```fsharp
    module DoublyLinkedList =
        type 'a ll =
            | None
            | Node of 'a ll_node
        and 'a ll_node =
            { mutable Prev: 'a ll;
              Element: 'a;
              mutable Next: 'a ll; }

        let insert x l =
            match l with
            | None -> Node({ Prev = None; Element = x; Next = None })
            | Node(node) ->
                match node.Prev with
                | None ->
                    let new_node = { Prev = None; Element = x; Next = Node(node) }
                    node.Prev <- Node(new_node)
                    Node(new_node)
                | Node(prev_node) ->
                    let new_node = { Prev = node.Prev; Element = x; Next = Node(node) }
                    node.Prev <- Node(new_node)
                    prev_node.Next <- Node(new_node)
                    Node(prev_node)

        let rec nth n l =
            match n, l with
            | _, None -> None
            | _, Node(node) when n > 0 -> nth (n - 1) node.Next
            | _, Node(node) when n < 0 -> nth (n + 1) node.Prev
            | _, Node(node) -> Node(node) // hopefully only when n = 0 :-)

        let rec printLinkedList head =
            match head with
            | None -> ()
            | Node(x) ->
                let prev =
                    match x.Prev with
                    | None -> "-"
                    | Node(y) -> y.Element.ToString()
                let cur = x.Element.ToString()
                let next =
                    match x.Next with
                    | None -> "-"
                    | Node(y) -> y.Element.ToString()
                printfn "%s, <- %s -> %s" prev cur next
                printLinkedList x.Next
    ```

  • Updating D3 column chart with different values and different data sizes

    - by mbeasley
    Background

    I am attempting to create a reusable chart object with D3.js. I have set up a chart() function that will produce a column chart. On a click event on any of the columns, the chart will update with a new random data array containing a random number of data points (i.e., the original chart could have 8 columns, but upon update could have 20 columns or 4 columns).

    Problem

    Say I have 8 data points (and thus 8 columns) in my original dataset. When I update the chart with random data, the columns appropriately adjust their height to the new values, but new bars aren't added. Additionally, while the width of the columns appropriately adjusts to accommodate the width of the container and the new number of data points, if that number of data points is less than the original set, then some of the columns from the original dataset will linger until the number of data points is greater than or equal to the original. My end goal is to have new data dynamically added, or old data outside the range of the new data count dynamically removed.

    I've created a jsfiddle of the behavior. You may have to click the columns a couple of times to see the behavior I'm describing. Additionally, I've pasted my code below. Thanks in advance!

    ```javascript
    function chart(config) {
        // set default options
        var defaultOptions = {
            selector: '#chartZone',
            class: 'chart',
            id: null,
            data: [1, 2, 6, 4, 2, 6, 7, 2],
            type: 'column',
            width: 200,
            height: 200,
            callback: null,
            interpolate: 'monotone'
        };

        // fill in unspecified settings in the config with the defaults
        var settings = $.extend(defaultOptions, config);

        function my() { // generate chart with this function
            var w = settings.width,
                h = settings.height,
                barPadding = 3,
                scale = 10,
                max = d3.max(settings.data);

            var svg = d3.select(settings.selector) // create the main svg container
                .append("svg")
                .attr("width", w)
                .attr("height", h);

            var y = d3.scale.linear().range([h, 0]),
                yAxis = d3.svg.axis().scale(y).ticks(5).orient("left"),
                x = d3.scale.linear().range([w, 0]);

            y.domain([0, max]).nice();
            x.domain([0, settings.data.length - 1]).nice();

            var rect = svg.selectAll("rect")
                .data(settings.data)
                .enter()
                .append("rect")
                .attr("x", function(d, i) { return i * (w / settings.data.length); })
                .attr("y", function(d) { return h - h * (d / max); })
                .attr("width", w / settings.data.length - barPadding)
                .attr("height", function(d) { return h * (d / max); })
                .attr("fill", "rgb(90,90,90)");

            svg.append("svg:g")
                .attr("class", "y axis")
                .attr("transform", "translate(-4,0)")
                .call(yAxis);

            svg.on("click", function() {
                var newData = [],
                    maxCap = Math.round(Math.random() * 100);
                for (var i = 0; i < Math.round(Math.random() * 100); i++) {
                    var newNumber = Math.random() * maxCap;
                    newData.push(Math.round(newNumber));
                }
                newMax = d3.max(newData);
                y.domain([0, newMax]).nice();
                var t = svg.transition().duration(750);
                t.select(".y.axis").call(yAxis);
                rect.data(newData)
                    .transition().duration(750)
                    .attr("height", function(d) { return h * (d / newMax); })
                    .attr("x", function(d, i) { return i * (w / newData.length); })
                    .attr("width", w / newData.length - barPadding)
                    .attr("y", function(d) { return h - h * (d / newMax); });
            });
        }

        my();
        return my;
    }

    var myChart = chart();
    ```

  • Using Oracle Data in the Business Rules Engine

    - by Christopher House
    Yesterday I started working on some new functionality that I had planned to implement using the Business Rules Engine. As I got further into it, I realized that some of my rules were going to need to reference data that resides in an Oracle database. I knew the Business Rules Composer supports using DataConnections and TypedDataTables, but I'd never used this functionality myself, so I wasn't sure how it would work with Oracle. As it turns out, it's very do-able; there's just a little hoop you need to jump through.

    I fired up BRC and my suspicions were quickly confirmed: BRC only recognizes SQL Server databases when it comes to editing rules. Not letting that deter me, I decided to see if I could "trick" BRE into using Oracle data. On my local SQL Server, I created a new database, and in that database created a table that matched the schema of the table I wanted to use in the Oracle database. I then set about creating my rules, referencing the new SQL Server database everywhere I wanted to use Oracle data. Finally, I created a new class library and added a class that implements Microsoft.RuleEngine.IFactRetriever. In that class, I added the necessary code to get a DataSet from the Oracle server, wrap it in a TypedDataTable, and assert it into the rule engine. It's worth pointing out that in my IFactRetriever class, I made sure to set my DataSet's name to the name of the database I'd referenced in the BRC, and the DataTable's name to the name of the table I'd referenced in the BRC.

    After gac'ing the new class library and deploying my policy, I tested and everything worked as expected.
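    To make the pattern concrete, here is a hedged sketch of the IFactRetriever class described above, assuming BizTalk's Microsoft.RuleEngine types; the database, table, and helper names are placeholders rather than the ones used in the actual project:

    ```csharp
    using System.Data;
    using Microsoft.RuleEngine;

    public class OracleFactRetriever : IFactRetriever
    {
        public object UpdateFacts(RuleSetInfo ruleSetInfo, RuleEngine engine, object factsHandleIn)
        {
            // The DataSet name must match the database name referenced in the
            // Business Rules Composer, and the DataTable name must match the
            // referenced table name ("OracleDb"/"ORACLE_TABLE" are placeholders).
            var dataSet = new DataSet("OracleDb");
            var table = new DataTable("ORACLE_TABLE");
            dataSet.Tables.Add(table);

            FillFromOracle(table); // hypothetical helper, see below

            // Wrap the table and assert it into the rule engine, as described above.
            engine.Assert(new TypedDataTable(table));
            return factsHandleIn;
        }

        private static void FillFromOracle(DataTable table)
        {
            // Placeholder for the Oracle query, e.g. with your ADO.NET provider:
            // using (var adapter = new OracleDataAdapter("SELECT ...", connectionString))
            //     adapter.Fill(table);
        }
    }
    ```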

  • How do comparison sites work?

    - by Vijay
    I need your thoughts on how these comparison sites actually work. Sites like Junglee.com and PolicyBazaar.com, and many others like them, provide comparisons of products, fares, etc. grabbed from different websites. I had read a little about it, and what I found is:

    - These sites use feeds of the sites' data.
    - These sites use APIs which are actually provided by those sites.
    - For sites where neither of these two possibilities exists, the comparison sites use a web crawler to crawl their data.

    This is what I have found out. If you think there is more to it, please do give your own views. I want to know this for learning purposes, and a little out of curiosity: how do they actually match the crawled data, the feeds, and the rest so that there is no duplication? What is the process or algorithm for it? And where should I go to learn these concepts? References to books, articles, or anything else are welcome.

  • Restoring an Ubuntu Server using ZFS RAIDZ for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on 1 ext4 disk, data on 3 RAIDZ disks). As Joel Spolsky and Jeff Atwood say with regard to backup, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against.

    Q: How do I configure Ubuntu Server to recognise a pre-existing RAIDZ array?

    Clearly, if one of the data disks dies, that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore it, that's also an easy scenario. But if the OS dies and I can't restore it, then I need to recreate an Ubuntu Server installation. How do I get this to recognise my RAIDZ array? Is the necessary configuration information stored within and across the RAIDZ array, and does it simply need to be found (if so, how)? Or does it reside on the OS's ext4 disk (in which case, how do I recreate it)?

  • Can I use Ubuntu One to sync data files between two remote computers?

    - by Sleepy John
    I've got two computers, both running Ubuntu, with files in their home folders synced to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download data changes, uploaded automatically to Ubuntu One from one computer, to the equivalent data file on the other.

    Clarifying a bit further: I've installed Red Notebook on both computers, and so each has its own /.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries in each of them. These are synced to upload any changes to those .txt files to Ubuntu One. My question is: can I, and if so how do I, make Ubuntu One automatically download and replace those .txt files on the other computer after they've been updated and uploaded from the first computer?

    I did laboriously manage to download all those text files which had been uploaded from the first computer, one by one from Ubuntu One to the second computer, but what I want to do is automate this process, and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were online at the same time and both were simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer will automatically be downloaded to the same-named files on the other computer as soon as that second computer appears online and detects that Ubuntu One has matching but more recent synced files than the ones it's holding.

  • AT&T’s new prepaid plan for smartphones –$65 for 1 GB data and unlimited calls, text

    - by Gopinath
    AT&T is stepping up competition in prepaid mobile plans offering and trying to attract more smartphone customers who are not interested to lock in with expensive contracts. Today AT&T announced a new prepaid plan for smartphone customers which offers 1 GB of , unlimited calls and text for $65 a month. Compared to existing plans that offers same , the new plan saves $10 per month and rates are comparable to T-Mobile prepaid service. The new plan will be available to all prepaid customers from October 7, 2012. I’m using AT&T prepaid plan for the past 3 months and paying $75 for 1 GB data, unlimited calls. Few days ago I did some analysis on prepaid plans offered by various network providers and found T-Mobile has cheapest plans that suits my needs – $60 for 2 GB data,  unlimited calls and texts. Even though T Mobile’s network coverage is not as great as AT&T in the area where I live, I planned to switch to save $15 per month. After reading today’s announcement, I don’t think that I’ll switch to T Mobile for saving $5 + 1 GB of extra data.  Thanks AT&T for the new plan, I’ll stay with you for now. via engadget

  • Explaining the difference between OData & RDF by way of analogy

    - by jamiet
    A couple of months back I wrote a blog post entitled "Microsoft, OData and RDF" where I gave a high-level view of the OData protocol and how it compares to RDF. I talked about linked data, triples and such like, which may have been somewhat useful, though jargon-heavy. Earlier today Dr Michael Hausenblas (blog | twitter) offered an analogy which I think is probably more useful, and with Michael's permission I'm re-posting it here:

    "Imagine a Web (a Web of Documents, if you wish), which is not based on HTML and hyperlinks, but on MS Word documents. The documents are all available on the Internet, so you can download them and consume the content. But after you're done with a certain document that talks about a book, how do you learn more about it? For example, reviews about the book, or where you can purchase it? Maybe the original document mentions that there is some more related information on another server. So you'd need to go there and look for the related bit of information yourself. You see? That's what the Web is great at: you just click on a hyperlink and it takes you to the document (or section) you're interested in. All the legwork is taken care of for you through HTML, URIs and HTTP.

    Hm, right, but how is this related to OData? Well, OData feels a bit like the above-mentioned scenario, just concerning data. Of course you (well, actually rather a software program, I guess) can consume it (a single source), but that's it."

    from "Oh – it is data on the Web" by Michael Hausenblas

    I believe that OData has loads of use cases, but it's important to understand its limitations as well, and I think Michael has done a good job of explaining those limitations.

    @Jamiet
