Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.


  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi Guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (e.g. the + or o or a square, etc.). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the datasets converge (> 1e3) it's really difficult to see what data is what. There's over 1000 data points per decade which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution. What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades): %grab indices per decade ind12 = find(y >= 1e1 & y <= 1e2); indlow = find(y < 1e2); indhigh = find(y > 1e4); ind23 = find(y >+ 1e2 & y <= 1e3); ind34 = find(y >+ 1e3 & y <= 1e4); %We want ind12 indexes in this decade, find spacing tot23 = round(length(ind23)/length(ind12)); tot34 = round(length(ind34)/length(ind12)); %grab ones to keep ind23keep = ind23(1):tot23:ind23(end); ind34keep = ind34(1):tot34:ind34(end); indnew = [indlow' ind23keep ind34keep indhigh']; loglog(x(indnew), y(indnew)); But this causes the prune to behave in a jumpy fashion, obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?
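
    Since the data is roughly linear in its index, one way to get the index vector asked for is to space the indices themselves logarithmically between 1 and numel(y). A minimal MATLAB sketch (not the poster's code; the stand-in data and the kept-point count M are made up):

      x = linspace(1, 1e6, 2e4);  y = x;                     % stand-in data so the sketch runs
      M = 500;                                               % total points to keep (assumed)
      idx = unique(round(logspace(0, log10(numel(y)), M)));  % log-spaced positions 1..numel(y)
      loglog(x(idx), y(idx), 'o');

    Near the low end consecutive positions collapse onto the same sample (hence the unique), so the first decades keep almost everything while the top decades are thinned hardest, which is the behaviour described.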

    Read the article

  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question, it's a design question that has to do with avoiding mutable state, functional thinking and that sort. It just happens that I'm using Scala. Given this set of requirements: Input comes from an essentially infinite stream of random numbers between 1 and 10 Final output is either SUCCEED or FAIL There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself. Pseudocode: if (first number == 1) SUCCEED else if (first number >= 9) FAIL else { first = first number rest = rest of stream for each (n in rest) { if (n == 1) FAIL else if (n == first) SUCCEED else continue } } Here is a possible mutable implementation: sealed trait Result case object Fail extends Result case object Succeed extends Result case object NoResult extends Result class StreamListener { private var target: Option[Int] = None def evaluate(n: Int): Result = target match { case None => if (n == 1) Succeed else if (n >= 9) Fail else { target = Some(n) NoResult } case Some(t) => if (n == t) Succeed else if (n == 1) Fail else NoResult } } This will work but smells to me. StreamListener.evaluate is not referentially transparent. And the use of the NoResult token just doesn't feel right. It does have the advantage though of being clear and easy to use/code. Besides there has to be a functional solution to this right? I've come up with 2 other possible options: Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener which doesn't feel right. Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation. Is there any standard scala/fp idiom I'm overlooking here?
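
    One common functional idiom here (a sketch reusing the poster's names, not a drop-in): keep evaluate per-number, but have it return the successor listener along with the result, so the caller threads the state and nothing is ever mutated.

      sealed trait Result
      case object Fail extends Result
      case object Succeed extends Result
      case object NoResult extends Result

      // Sketch: evaluating never changes this listener, it returns the next one to use.
      case class StreamListener(target: Option[Int] = None) {
        def evaluate(n: Int): (Result, StreamListener) = target match {
          case None =>
            if (n == 1) (Succeed, this)
            else if (n >= 9) (Fail, this)
            else (NoResult, StreamListener(Some(n)))
          case Some(t) =>
            if (n == t) (Succeed, this)
            else if (n == 1) (Fail, this)
            else (NoResult, this)
        }
      }

      // Usage: the registrar threads each listener through the numbers as they arrive.
      val (r1, next) = StreamListener().evaluate(5)   // (NoResult, listener targeting 5)
      val (r2, _)    = next.evaluate(5)               // (Succeed, _)

    evaluate is now referentially transparent, and NoResult simply means "keep listening with the returned listener", so the registrar can still react after every single generated number.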

    Read the article

  • C++ Class Templates (Queue of a class)

    - by Dalton Conley
    Ok, so I have my basic linked Queue class with basic functions such as front(), empty(), etc., and I have transformed it into a template. Now, I also have a class called Student, which holds 2 values: student name and student ID. I can print out a student with the following code: Student me("My Name", 2); cout << me << endl; Here is my display function for Student: void display(ostream &out) const { out << "Student Name: " << name << "\tStudent Id: " << id << "\tAddress: " << this << endl; } Now it works fine, you can see the basic output. Now I'm declaring a queue like so: Queue<Student> qstu; Storing data in this queue is fine, I can add new values and such. Now what I'm trying to do is print out my whole queue of students with: cout << qstu << endl; But it's simply returning an address. Here is my display function for queues. void display(ostream & out) const { NodePointer ptr; ptr = myFront; while(ptr != NULL) { out << ptr->data << " "; ptr = ptr->next; } out << endl; } Now, based on this, I assume ptr->data is a Student type and I would assume this would work, but it doesn't. Is there something I'm missing? Also, when I try: ptr->data.display(out); (making the assumption ptr->data is of type Student), it does not work, which tells me I am doing something wrong. Help on this would be much appreciated!
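
    The missing piece in this pattern is usually a pair of non-member operator<< overloads that forward to the display members; cout never calls display on its own. A compilable sketch (not the poster's code: Student is trimmed down and the Queue here is a vector-backed stand-in for the linked queue):

      #include <iostream>
      #include <string>
      #include <vector>

      class Student {
      public:
          Student(const std::string &name, int id) : name(name), id(id) {}
          void display(std::ostream &out) const {
              out << "Student Name: " << name << "\tStudent Id: " << id;
          }
      private:
          std::string name;
          int id;
      };

      // Forwarding operator<< is what lets `out << ptr->data` find a Student printer.
      std::ostream &operator<<(std::ostream &out, const Student &s) {
          s.display(out);
          return out;
      }

      template <typename T>
      class Queue {                                    // stand-in for the linked queue
      public:
          void enqueue(const T &v) { items.push_back(v); }
          void display(std::ostream &out) const {
              for (const T &item : items) out << item << "  ";   // uses T's operator<<
          }
      private:
          std::vector<T> items;
      };

      template <typename T>
      std::ostream &operator<<(std::ostream &out, const Queue<T> &q) {
          q.display(out);
          return out;
      }

      int main() {
          Queue<Student> qstu;
          qstu.enqueue(Student("My Name", 2));
          std::cout << qstu << std::endl;              // prints the students, not an address
      }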

    Read the article

  • Save HashMap data into SQLite

    - by Matthew
    I'm trying to save data from JSON into SQLite. For now I keep the JSON data in a HashMap. I've already searched, and the advice was to use ContentValues, but I still don't get how to use it. I tried looking at this question, save data to SQLite from json object using Hashmap in Android, but it doesn't help a lot. Is there any option that I can use to save the data from the HashMap into SQLite? Here's my code. MainHellobali.java // Hashmap for ListView ArrayList<HashMap<String, String>> all_itemList; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main_helloballi); all_itemList = new ArrayList<HashMap<String, String>>(); // Calling async task to get json new getAllItem().execute(); } private class getAllItem extends AsyncTask<Void, Void, Void> { @Override protected Void doInBackground(Void... arg0) { // Creating service handler class instance ServiceHandler sh = new ServiceHandler(); // Making a request to url and getting response String jsonStr = sh.makeServiceCall(url, ServiceHandler.GET); Log.d("Response: ", "> " + jsonStr); if (jsonStr != null) { try { all_item = new JSONArray(jsonStr); // looping through All Contacts for (int i = 0; i < all_item.length(); i++) { JSONObject c = all_item.getJSONObject(i); String item_id = c.getString(TAG_ITEM_ID); String category_name = c.getString(TAG_CATEGORY_NAME); String item_name = c.getString(TAG_ITEM_NAME); // tmp hashmap for single contact HashMap<String, String> allItem = new HashMap<String, String>(); // adding each child node to HashMap key => value allItem.put(TAG_ITEM_ID, item_id); allItem.put(TAG_CATEGORY_NAME, category_name); allItem.put(TAG_ITEM_NAME, item_name); // adding contact to contact list all_itemList.add(allItem); } } catch (JSONException e) { e.printStackTrace(); } } else { Log.e("ServiceHandler", "Couldn't get any data from the url"); } return null; } } I have DatabaseHandler.java and AllItem.java too. I can put them in here if necessary. Thanks in advance. ** Added edited code ** // looping through All Contacts for (int i = 0; i < all_item.length(); i++) { JSONObject c = all_item.getJSONObject(i); String item_id = c.getString(TAG_ITEM_ID); String category_name = c.getString(TAG_CATEGORY_NAME); String item_name = c.getString(TAG_ITEM_NAME); DatabaseHandler databaseHandler = new DatabaseHandler(this); //error here "The Constructor DatabaseHandler(MainHellobali.getAllItem) is undefined }
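
    A sketch of the ContentValues step (the table name items and the column names are made up here; match them to the schema in DatabaseHandler, which is assumed to be a normal SQLiteOpenHelper). Note also that inside the AsyncTask, this refers to the task, not the activity, which is what the quoted constructor error is complaining about; pass MainHellobali.this (or an application context) instead.

      // Sketch only: run after the JSON loop has filled all_itemList.
      DatabaseHandler databaseHandler = new DatabaseHandler(MainHellobali.this);
      SQLiteDatabase db = databaseHandler.getWritableDatabase();
      for (HashMap<String, String> item : all_itemList) {
          ContentValues values = new ContentValues();
          values.put("item_id", item.get(TAG_ITEM_ID));             // column names assumed
          values.put("category_name", item.get(TAG_CATEGORY_NAME));
          values.put("item_name", item.get(TAG_ITEM_NAME));
          db.insert("items", null, values);                         // table name assumed
      }
      db.close();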

    Read the article

  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain driven design, a specification is used to check if some object, also known as the candidate, is compliant to a (domain specific) requirement. For example, the specification 'IsTaskDone' goes like: class IsTaskDone extends Specification<Task> { boolean isSatisfiedBy(Task candidate) { return candidate.isDone(); } } The above specification can be used for many purposes, e.g. it can be used to validate if a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this, nice, domain related specification to query on the database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database, and filter that list in-memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases. Proposal So my idea is to create a 'ConversionManager' that translates my specification into a persistence technique specific criteria, think of the JPA predicate class. The services looks as follows: public interface JpaSpecificationConversionManager { <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root, CriteriaQuery<?> cq, CriteriaBuilder cb); JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter); } By using our manager, the users can register their own conversion logic, isolating the domain related specification from persistence specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find by specification method. Providing a find by specification should drastically reduce the number of methods on our repository interface. In theory, this all sounds decent, but I feel like I'm missing something critical. What do you guys think of my proposal, does it comply to the DDD way of thinking? Or is there already a framework that does something identical to what I just described?
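
    For what it's worth, the proposal maps quite directly onto the JPA Criteria API. A fragment-level sketch (the convert signature on JpaSpecificationConverter, the done attribute on Task, and the injected entityManager/conversionManager fields are assumptions):

      // One converter per specification type, registered with the manager.
      public class IsTaskDoneConverter implements JpaSpecificationConverter<IsTaskDone, Task> {
          public Predicate convert(IsTaskDone spec, Root<Task> root,
                                   CriteriaQuery<?> cq, CriteriaBuilder cb) {
              return cb.isTrue(root.<Boolean>get("done"));   // attribute name assumed
          }
      }

      // Repository side: one findBySpecification instead of one finder per query.
      public List<Task> findBySpecification(Specification<Task> spec) {
          CriteriaBuilder cb = entityManager.getCriteriaBuilder();
          CriteriaQuery<Task> cq = cb.createQuery(Task.class);
          Root<Task> root = cq.from(Task.class);
          cq.where(conversionManager.getPredicateFor(spec, root, cq, cb));
          return entityManager.createQuery(cq).getResultList();
      }

    As for an existing framework: Spring Data JPA's Specification / JpaSpecificationExecutor pair does essentially this, offering a single findAll(Specification) in place of one finder method per query.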

    Read the article

  • Is it ok to dynamic cast "this" as a return value?

    - by Panayiotis Karabassis
    This is more of a design question. I have a template class, and I want to add extra methods to it depending on the template type. To practice the DRY principle, I have come up with this pattern (definitions intentionally omitted): template <class T> class BaseVector: public boost::array<T, 3> { protected: BaseVector<T>(const T x, const T y, const T z); public: bool operator == (const Vector<T> &other) const; Vector<T> operator + (const Vector<T> &other) const; Vector<T> operator - (const Vector<T> &other) const; Vector<T> &operator += (const Vector<T> &other) { (*this)[0] += other[0]; (*this)[1] += other[1]; (*this)[2] += other[2]; return *dynamic_cast<Vector<T> * const>(this); } } template <class T> class Vector : public BaseVector<T> { public: Vector<T>(const T x, const T y, const T z) : BaseVector<T>(x, y, z) { } }; template <> class Vector<double> : public BaseVector<double> { public: Vector<double>(const double x, const double y, const double z); Vector<double>(const Vector<int> &other); double norm() const; }; I intend BaseVector to be nothing more than an implementation detail. This works, but I am concerned about operator+=. My question is: is the dynamic cast of the this pointer a code smell? Is there a better way to achieve what I am trying to do (avoid code duplication, and unnecessary casts in the user code)? Or am I safe since, the BaseVector constructor is private?
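
    One cast-free alternative worth weighing is the curiously recurring template pattern (CRTP), where the base is told the derived type up front, so operator+= can return Derived& through a static_cast that is valid by construction: no RTTI, and no possibility of the cast failing at run time. A sketch (constructors and the remaining operators omitted):

      #include <boost/array.hpp>

      // Sketch only: the derived type is a template parameter of the base.
      template <class T, class Derived>
      class BaseVector : public boost::array<T, 3> {
      public:
          Derived &operator+=(const Derived &other) {
              (*this)[0] += other[0];
              (*this)[1] += other[1];
              (*this)[2] += other[2];
              return static_cast<Derived &>(*this);   // *this is always a Derived here
          }
      };

      template <class T>
      class Vector : public BaseVector<T, Vector<T> > { /* ctors as before */ };

      template <>
      class Vector<double> : public BaseVector<double, Vector<double> > {
      public:
          double norm() const;                        // double-only extras, as in the question
      };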

    Read the article

  • Meta Search Engine Architecture

    - by Loki
    The question wasn't clear enough, I think; here's an updated straight to the point question: What are the common architectures used in building a meta search engine and is there any libraries available to build that type of search engine? I'm looking at building an "enterprise" type of search engine where the indexed data could be coming from proprietary (like Autonomy or a Google Box) or public search engines (like Google Web or Yahoo Web).

    Read the article

  • Summarising (permanently) data in a SQL table

    - by Cylindric
    Geetings, Stackers. I have a huge number of data-points in a SQL table, and I want to summarise them in a way reminiscent of RRD. Assuming a table such as ID | ENTITY_ID | SCORE_DATE | SCORE | SOME_OTHER_DATA ----+-----------+------------+-------+----------------- 1 | A00000001 | 01/01/2010 | 100 | some data 2 | A00000002 | 01/01/2010 | 105 | more data 3 | A00000003 | 01/01/2010 | 104 | various text ... | ......... | .......... | ..... | ... ... | A00009999 | 01/01/2010 | 101 | ... | A00000001 | 02/01/2010 | 104 | ... | A00000002 | 02/01/2010 | 119 | ... | A00000003 | 02/01/2010 | 119 | ... | ......... | .......... | ..... | ... | A00009999 | 02/01/2010 | 101 | arbitrary data ... | ......... | .......... | ..... | ... ... | A00000001 | 01/02/2010 | 104 | ... | A00000002 | 01/02/2010 | 119 | ... | A00000003 | 01/01/2010 | 119 | I want to end up with one record per entity, per month: ID | ENTITY_ID | SCORE_DATE | SCORE | ----+-----------+------------+-------+ ... | A00000001 | 01/01/2010 | 100 | ... | A00000002 | 01/01/2010 | 105 | ... | A00000003 | 01/01/2010 | 104 | ... | A00000001 | 01/02/2010 | 100 | ... | A00000002 | 01/02/2010 | 105 | ... | A00000003 | 01/02/2010 | 104 | (I Don't care about the SOME_OTHER_DATA - I'll pick something - either the first or last record probably.) What's an easy way of doing this on a regular basis, so that anything in the last calendar month is summarised in this way? At the moment my plan is kind of: For each EntityID For each month Find average score for all records in given month Update first record with results of previous step Delete all records that aren't the first I can't think of a neat way of doing it though, that doesn't involve lots of updates and iteration. This can either be done in a SQL Stored Procedure, or it can be incorporated into the .Net app that's generating this data, so the solution doesn't really need to be "one big SQL script", but can be :) (SQL-2005)
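
    A T-SQL sketch of the monthly roll-up (the table name ScoreTable and the decision to keep only ENTITY_ID, the month, and the averaged SCORE are assumptions; run the three statements in one transaction on a monthly schedule):

      -- 1. one averaged row per entity per completed month (sketch: table name assumed)
      SELECT ENTITY_ID,
             DATEADD(month, DATEDIFF(month, 0, SCORE_DATE), 0) AS SCORE_DATE,  -- first of month
             AVG(SCORE) AS SCORE
      INTO   #MonthlySummary
      FROM   ScoreTable
      WHERE  SCORE_DATE < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)     -- ignore the current month
      GROUP BY ENTITY_ID, DATEADD(month, DATEDIFF(month, 0, SCORE_DATE), 0);

      -- 2. swap the detail rows for the summary rows
      DELETE FROM ScoreTable
      WHERE  SCORE_DATE < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);

      INSERT INTO ScoreTable (ENTITY_ID, SCORE_DATE, SCORE)
      SELECT ENTITY_ID, SCORE_DATE, SCORE FROM #MonthlySummary;

      DROP TABLE #MonthlySummary;

    No per-entity iteration is needed; the GROUP BY does the looping.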

    Read the article

  • How big is too big (for NTFS)

    - by BCS
    I have a program which, as it stands, has a data directory with something like 10-30K files in it, and it's starting to cause problems. Should I expect that many files to cause problems, so that my only solution is to tweak my file structure, or does this indicate other problems?

    Read the article

  • Is there a recommended approach to handle saving data in response to within-site navigation without

    - by Carvell Fenton
    Hello all, Preamble to scope my question: I have a web app (or site, this is an internal LAN site) that uses jQuery and AJAX extensively to dynamically load the content section of the UI in the browser. A user navigates the app using a navigation menu. Clicking an item in the navigation menu makes an AJAX call to php, and php then returns the content that is used to populate the central content section. One of the pages served back by php has a table form, set up like a spreadsheet, that the user enters values into. This table is always kept in sync with data in the database. So, when the table is created, is it populated with the relevant database data. Then when the user makes a change in a "cell", that change immediately is written back to the database so the table and database are always in sync. This approach was take to reassure users that the data they entered has been saved (long story...), and to alleviate them from having to click a save button of some kind. So, this always in sync idea is great, except that a user can enter a value in a cell, not take focus out of the cell, and then take any number of actions that would cause that last value to be lost: e.g. navigate to another section of the site via the navigation menu, log out of the app, close the browser, etc. End of preamble, on to the issue: I initially thought that wasn't a problem, because I would just track what data was "dirty" or not saved, and then in the onunload event I would do a final write to the database. Herein lies the rub: because of my clever (or not so clever, not sure) use of AJAX and dynamically loading the content section, the user never actually leaves the original url, or page, when the above actions are taken, with the exception of closing the browser. Therefore, the onunload event does not fire, and I am back to losing the last data again. My question, is there a recommended way to handle figuring out if a person is navigating away from a "section" of your app when content is dynamically loaded this way? I can come up with a solution I think, that involves globals and tracking the currently viewed page, but I thought I would check if there might be a more elegant solution out there, or a change I could make in my design, that would make this work. Thanks in advance as always!
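
    One workable pattern (a sketch; #nav-menu and save.php are placeholders for the app's own menu selector and save endpoint): route every way of leaving the content section through the same flush function, with the nav-menu click handler covering AJAX navigation and beforeunload covering browser close or real navigation.

      var dirtyCell = null;   // set wherever a cell edit is detected, cleared after save

      function flushDirty() {
        if (dirtyCell) {
          $.ajax({ url: 'save.php', type: 'POST',          // placeholder endpoint
                   data: { id: dirtyCell.id, value: dirtyCell.value }, async: false });
          dirtyCell = null;
        }
      }

      $('#nav-menu a').click(flushDirty);          // fires before the content swap
      $(window).bind('beforeunload', flushDirty);  // browser close / real navigation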

    Read the article

  • Foreign key pointing to different tables

    - by Álvaro G. Vicario
    I'm implementing a table per subclass design I discussed in a previous question. It's a product database where products can have very different attributes depending on their type, but attributes are fixed for each type and types are not manageable at all. I have a master table that holds common attributes: product_type ============ product_type_id INT product_type_name VARCHAR E.g.: 1 'Magazine' 2 'Web site' product ======= product_id INT product_name VARCHAR product_type_id INT -> Foreign key to product_type.product_type_id valid_since DATETIME valid_to DATETIME E.g. 1 'Foo Magazine' 1 '1998-12-01' NULL 2 'Bar Weekly Review' 1 '2005-01-01' NULL 3 'E-commerce App' 2 '2009-10-15' NULL 4 'CMS' 2 '2010-02-01' NULL ... and one subtable for each product type: item_magazine ============= item_magazine_id INT title VARCHAR product_id INT -> Foreign key to product.product_id issue_number INT pages INT copies INT close_date DATETIME release_date DATETIME E.g. 1 'Foo Magazine Regular Issue' 1 89 52 150000 '2010-06-25' '2010-06-31' 2 'Foo Magazine Summer Special' 1 90 60 175000 '2010-07-25' '2010-07-31' 3 'Bar Weekly Review Regular Issue' 2 12 16 20000 '2010-06-01' '2010-06-02' item_web_site ============= item_web_site_id INT name VARCHAR product_id INT -> Foreign key to product.product_id bandwidth INT hits INT date_from DATETIME date_to DATETIME E.g. 1 'The Carpet Store' 3 10 90000 '2010-06-01' NULL 2 'Penauts R Us' 3 20 180000 '2010-08-01' NULL 3 'Springfield Cattle Fair' 4 15 150000 '2010-05-01' '2010-10-31' Now I want to add some fees that relate to one specific item. Since there are very little subtypes, it's feasible to do this: fee === fee_id INT fee_description VARCHAR item_magazine_id INT -> Foreign key to item_magazine.item_magazine_id item_web_site_id INT -> Foreign key to item_web_site.item_web_site_id net_price DECIMAL E.g.: 1 'Front cover' 2 NULL 1999.99 2 'Half page' 2 NULL 500.00 3 'Square banner' NULL 3 790.50 4 'Animation' NULL 3 2000.00 I have tight foreign keys to handle cascaded editions and I presume I can add a constraint so only one of the IDs is NOT NULL. However, my intuition suggests that it would be cleaner to get rid of the item_WHATEVER_id columns and keep a separate table: fee_to_item =========== fee_id INT -> Foreign key to fee.fee_id product_id INT -> Foreign key to product.product_id item_id INT -> ??? But I can't figure out how to create foreign keys on item_id since the source table varies depending on product_id. Should I stick to my original idea?
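
    If you stay with the two nullable columns, the "exactly one of them is set" rule can live in the table itself, provided the engine enforces CHECK constraints (MySQL, for instance, only does so from 8.0.16; on older versions, which parse but ignore them, a BEFORE INSERT/UPDATE trigger does the same job). A sketch:

      -- sketch: enforce that exactly one of the two item ids is populated
      ALTER TABLE fee
        ADD CONSTRAINT chk_fee_exactly_one_item CHECK (
          (item_magazine_id IS NOT NULL AND item_web_site_id IS NULL) OR
          (item_magazine_id IS NULL AND item_web_site_id IS NOT NULL)
        );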

    Read the article

  • How to calculate a measure of a total error in this clustering

    - by Vafello
    I have the following points and clustering of data S1. Can anyone tell me how to calculate the total error associated with this clustering? I know it's not a strictly programming question, but I need it for my algorithm. I think the answer should be 4/3 but I have no idea how to calculate this. Can anyone help me? x1= (2.0,1.0) x2= (2.0,2.0) x3= (1.0,2.0) S1={ x1, x2, x3 }
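
    Assuming "total error" here means the usual k-means criterion, the sum of squared Euclidean distances from each point to the cluster centroid, the 4/3 figure works out directly:

      centroid of S1 = ((2.0 + 2.0 + 1.0)/3, (1.0 + 2.0 + 2.0)/3) = (5/3, 5/3)
      error(x1) = (2 - 5/3)^2 + (1 - 5/3)^2 = 1/9 + 4/9 = 5/9
      error(x2) = (2 - 5/3)^2 + (2 - 5/3)^2 = 1/9 + 1/9 = 2/9
      error(x3) = (1 - 5/3)^2 + (2 - 5/3)^2 = 4/9 + 1/9 = 5/9
      total error = 5/9 + 2/9 + 5/9 = 12/9 = 4/3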

    Read the article

  • Would ViewModels fit in the Model View Presenter pattern?

    - by Jonn
    Having used ViewModels in MVC, I was wondering if applying the same to the MVP pattern is practical. I only have a few reservations: one is that MVP is already fairly heavy to implement (because of all the additional coding rather than any real complexity), and another is that ViewModels already model data or entities in a broadly similar way. Would adding another layer in the form of ViewModels be redundant, or is it a logical abstraction that I, as someone implementing the MVP pattern, should adhere to?

    Read the article

  • Grid sorting with persistent master sort

    - by MikeWyatt
    I have a UI with a grid. Each record in the grid is sorted by a "master" sort column, let's call it a page number. Each record is a story in a magazine. I want the user to be able to drag and drop a record to a new position in the grid and automatically update the page number field to reflect the updated position. Easy enough, right? Now imagine that I also want to have the grid sortable by any other column (story title, section, author name, etc.). How does the drag and drop operation work now? Revert to page number sort during or after the drag and drop operation? This could confuse the user (why did my sort just change?). It would also result in arbitrary row positioning. Would the story now be before the row that was after it when the user dropped it? Or, would it be after the row that was before it? Those rows may now be widely separated after the master order sort. Disable the drag and drop feature if the grid isn't currently sorted by the page number? This would be easy, but the user might wonder why he can't drag and drop at certain times. Knowing to first sort by page number may not be very intuitive. Let the user rearrange his rows, but not make any changes to the page number? Require the user to enter a "Arrange Stories" mode, in which the grid sort is temporarily switched to page number and drag and drop is enabled? They would then exit the mode, and the previous sort would be reapplied. The big difference between this and the second option is that it would be more explicit than simply clicking on a column header. Any other ideas, or reasons why one of the above is the way to go? EDIT I'd like to point out that any of the above is technically possible, and easy to implement. My question is design-related. What is the most intuitive way to solve this problem, from the user's perspective?

    Read the article

  • Patterns: Local Singleton vs. Global Singleton?

    - by Mike Rosenblum
    There is a pattern that I use from time to time, but I'm not quite sure what it is called. I was hoping that the SO community could help me out. The pattern is pretty simple, and consists of two parts: A singleton factory, which creates objects based on the arguments passed to the factory method. Objects created by the factory. So far this is just a standard "singleton" pattern or "factory pattern". The issue that I'm asking about, however, is that the singleton factory in this case maintains a set of references to every object that it ever creates, held within a dictionary. These references can sometimes be strong references and sometimes weak references, but it can always reference any object that it has ever created. When receiving a request for a "new" object, the factory first searches the dictionary to see if an object with the required arguments already exits. If it does, it returns that object, if not, it returns a new object and also stores a reference to the new object within the dictionary. This pattern prevents having duplicative objects representing the same underlying "thing". This is useful where the created objects are relatively expensive. It can also be useful where these objects perform event handling or messaging - having one object per item being represented can prevent multiple messages/events for a single underlying source. There are probably other reasons to use this pattern, but this is where I've found this useful. My question is: what to call this? In a sense, each object is a singleton, at least with respect to the data it contains. Each is unique. But there are multiple instances of this class, however, so it's not at all a true singleton. In my own personal terminology, I tend to call the factory method a "global singleton". I then call the created objects "local singletons". I sometimes also say that the created objects have "reference equality", meaning that if two variables reference the same data (the same underlying item) then the reference they each hold must be to the same exact object, hence "reference equality". But these are my own invented terms, and I am not sure that they are good ones. Is there standard terminology for this concept? And if not, could some naming suggestions be made? Thanks in advance...
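
    A factory that interns its products in a map keyed by the construction arguments, handing back the existing instance on a repeat request, is usually called the Multiton pattern; it also overlaps with an identity map and with a Flyweight-style factory. A minimal Java sketch with illustrative names:

      import java.util.HashMap;
      import java.util.Map;

      public final class Sensor {                       // illustrative class name
          private static final Map<String, Sensor> instances = new HashMap<String, Sensor>();
          private final String key;

          private Sensor(String key) { this.key = key; }

          public static synchronized Sensor forKey(String key) {
              Sensor existing = instances.get(key);     // could hold WeakReferences instead
              if (existing == null) {
                  existing = new Sensor(key);
                  instances.put(key, existing);
              }
              return existing;
          }
      }

    Sensor.forKey("a") == Sensor.forKey("a") is then guaranteed to be true, which is the "reference equality" described above; swapping the map values for WeakReference<Sensor> gives the weakly-held variant.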

    Read the article

  • jQuery Form plugin - no data from file upload?

    - by pojo
    I've been struggling a bit with the jQuery Form plugin. I want to create a file upload form that posts the data (JSON, from the chosen file) into a REST service exposed by a servlet. The URL for the POST is calculated from what the user chooses in a SELECT dropdown. When the upload is complete, I want to notify the user immediately, AJAX-style. The problem is that the POST header has a Content-Length of 0 and contains no data. I would appreciate any help! <html> <head> <script type="text/javascript" src="js/jquery-1.4.2.min.js">/* ppp */</script> <script type="text/javascript" src="js/jquery.form.js">/* ppp */</script> <script type="text/javascript"> function cb_beforesubmit (arr, $form, options) { // This should override the form's action attribute options.url = "/rest/services/" + $('#selectedaction')[0].value; return true; } function cb_success (rt, st, xhr, wf) { $('#response').html(rt + '<br>' + st + '<br>' + xhr); } $(document).ready(function () { var options = { beforeSubmit: cb_beforesubmit, success: cb_success, dataType: 'json', contentType: 'application/json', method: 'POST', }; $('#myform').ajaxForm(options); $.getJSON('/rest/services', function (data, ts) { for (var property in data) { if (typeof property == 'string') { $('#selectedaction').append('<option>' + property + '</option>'); } } }); }); </script> </head> <body> <form id="myform" action="/rest/services/foo1" method="POST" enctype="multipart/form-data"> <!-- The form does not seem to submit at all if I don't set action to a default value? !--> <select id="selectedaction"> <script type="text/javascript"> </script> </select> <input type="file" value="Choose"/> <input type="submit" value="Submit" /> </form> <div id="response"> </div> </body> </html>
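
    Two things worth checking (general behaviour, not specific to this code): a form control is only included in a submission if it has a name attribute, so as written neither the select nor the file input contributes any data, and the jQuery Form plugin posts forms that contain file inputs through a hidden iframe, so the JSON contentType option does not apply to that multipart request. A sketch of the controls with names added (datafile is an illustrative name):

      <!-- the name attributes are what make the values appear in the POST body -->
      <select id="selectedaction" name="selectedaction"></select>
      <input type="file" name="datafile" value="Choose"/>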

    Read the article

  • Pattern for creating a database schema using JDBC

    - by Space_C0wb0y
    I have a Java application that loads data from a legacy file format into an SQLite database using JDBC. If the database file specified does not exist, it is supposed to create a new one. Currently the schema for the database is hardcoded in the application. I would much rather have it in a separate file as an SQL script, but apparently there is no easy way to execute an SQL script through JDBC. Is there any other way, or a pattern, to achieve something like this?
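
    A common pattern is to read the script yourself and feed it to JDBC one statement at a time. A sketch (the schema.sql path, the jdbc:sqlite URL and the naive split on ';' are assumptions that hold for straightforward DDL, not for triggers or stored routines):

      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class SchemaLoader {
          public static void createSchema(String dbFile) throws Exception {
              // The SQLite driver creates the database file if it does not exist yet.
              String script = new String(Files.readAllBytes(Paths.get("schema.sql")),
                                         StandardCharsets.UTF_8);
              try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + dbFile);
                   Statement stmt = conn.createStatement()) {
                  for (String sql : script.split(";")) {        // fine for plain DDL
                      if (!sql.trim().isEmpty()) {
                          stmt.executeUpdate(sql);
                      }
                  }
              }
          }
      }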

    Read the article

  • How best can I extract a logical model from a physical DB model

    - by Dean
    We have made substantial changes to our physical DB, and now, as it is the end of the project, I would like to abstract a logical model from this, to allow me to generate schemas for both Oracle and SQL Server. Can anyone guide me as to the best way to achieve this? I was hoping TOAD data modeller would help, but I can't seem to see any options to do what I require.

    Read the article

  • Create lags with a for-loop in R

    - by cptn
    I've got a data.frame with stock data of several companies (here it's only two). I want 10 additional columns in my stock data.frame df with lagged dates (from -5 days to +5 days) for both companies in my event data.frame. I'm using a for loop which is probably not the best solution, but it works partially. DATE <- c("01.01.2000","02.01.2000","03.01.2000","06.01.2000","07.01.2000","09.01.2000","10.01.2000","01.01.2000","02.01.2000","04.01.2000","06.01.2000","07.01.2000","09.01.2000","10.01.2000") RET <- c(-2.0,1.1,3,1.4,-0.2, 0.6, 0.1, -0.21, -1.2, 0.9, 0.3, -0.1,0.3,-0.12) COMP <- c("A","A","A","A","A","A","A","B","B","B","B","B","B","B") df <- data.frame(DATE, RET, COMP, stringsAsFactors=F) df # DATE RET COMP # 1 01.01.2000 -2.00 A # 2 02.01.2000 1.10 A # 3 03.01.2000 3.00 A # 4 06.01.2000 1.40 A # 5 07.01.2000 -0.20 A # 6 09.01.2000 0.60 A # 7 10.01.2000 0.10 A # 8 01.01.2000 -0.21 B # 9 02.01.2000 -1.20 B # 10 04.01.2000 0.90 B # 11 06.01.2000 0.30 B # 12 07.01.2000 -0.10 B # 13 09.01.2000 0.30 B # 14 10.01.2000 -0.12 B this loop works fine comp <- as.vector(unique(df$COMP)) mylist <- vector('list', length(comp)) # create lags in DATE for(i in 1:length(comp)) { print(i) comp_i <- comp[i] df_k <- df[df$COMP %in% comp_i, ] # all trading days of one firm df_k <- transform(df_k, DATEm1 = c(NA, head(DATE, -1)), DATEm2 = c(NA, NA, head(DATE, -2)), DATEm3 = c(NA, NA, NA, head(DATE, -3)), DATEm4 = c(NA, NA, NA, NA,head(DATE, -4)), DATEm5 = c(NA, NA, NA, NA, NA, head(DATE, -5)), DATEp1 = c(DATE[-1], NA)) #DATEp2 = c(DATE[-2], NA, NA), #DATEp3 = c(DATE[-3], NA, NA, NA), #DATEp4 = c(DATE[-4], NA, NA, NA, NA), #DATEp5 = c(DATE[-5], NA, NA, NA, NA, NA)) mylist[[i]] = df_k } df1 <- do.call(rbind, mylist) But if I add the lines with DATEp2, DATEp3, DATEp4, DATEp5. the code doesn't work. Can anybody tell me what I'm doing wrong here? Here the code with all the lagged dates. # create lags in DATE for(i in 1:length(comp)) { print(i) comp_i <- comp[i] df_k <- df[df$COMP %in% comp_i, ] # all trading days of one firm df_k <- transform(df_k, DATEm1 = c(NA, head(DATE, -1)), DATEm2 = c(NA, NA, head(DATE, -2)), DATEm3 = c(NA, NA, NA, head(DATE, -3)), DATEm4 = c(NA, NA, NA, NA,head(DATE, -4)), DATEm5 = c(NA, NA, NA, NA, NA, head(DATE, -5)), DATEp1 = c(DATE[-1], NA), DATEp2 = c(DATE[-2], NA, NA), DATEp3 = c(DATE[-3], NA, NA, NA), DATEp4 = c(DATE[-4], NA, NA, NA, NA), DATEp5 = c(DATE[-5], NA, NA, NA, NA, NA)) mylist[[i]] = df_k } df1 <- do.call(rbind, mylist)
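
    The leads are the problem rather than the loop: DATE[-2] drops only the second element, so c(DATE[-2], NA, NA) is one element too long, and transform() rejects the mismatched column lengths. Leading by k means dropping the first k elements and padding with k NAs; a sketch of a small helper (DATEp1 = c(DATE[-1], NA) is already the k = 1 case):

      # sketch: lead a vector by k positions while keeping its length
      lead_by <- function(x, k) c(x[-seq_len(k)], rep(NA, k))

      # inside transform(), in place of the failing lines:
      #   DATEp2 = lead_by(DATE, 2), DATEp3 = lead_by(DATE, 3),
      #   DATEp4 = lead_by(DATE, 4), DATEp5 = lead_by(DATE, 5)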

    Read the article

  • To share a table or not share?

    - by acidzombie24
    Right now on my (beta) site I have a table called user data which stores name, hash(password), ipaddr, sessionkey, email and message number. Now I would like the user to have a profile description, signature, location (optional) and maybe other things. Should I have this in a separate MySQL table, or should I share the table? And why?

    Read the article

  • Inbreeding-immune database structure

    - by Nick Savage
    I have an application that requires a "simple" family tree. I would like to be able to perform queries that will give me data for an entire family given one id from a member in the family. I say simple because it does not need to take into account adoption or any other obscurities. The requirements for the application are as follows: Any two people will not be able to breed if they're from the same genetic line Needs to allow for the addition of new family lines (new people with no previous family) Need to be able to pull siblings, parents separately through queries I'm having trouble coming up with the proper structure for the database. So far I've come up with two solutions but they're not very reliable and will probably get out of hand quite quickly. Solution 1 involves placing a family_ids field on the people table and storing a list of unique family ids. Each time two people breed the lists are checked against each other to make sure no ids match and if everything checks out will merge the two lists and set that as the child's family_ids field. Example: Father (family_ids: (null)) breeds with Mother (family_ids: (213, 519)) -> Child (family_ids: (213, 519)) breeds with Random Person (family_ids: (813, 712, 122, 767)) -> Grandchild (family_ids: (213, 519, 813, 712, 122, 767)) And so on and so forth... The problem I see with this is the lists becoming unreasonably large as time goes on. Solution 2 uses cakephp's associations to declare: public $belongsTo = array( 'Father' => array( 'className' => 'User', 'foreignKey' => 'father_id' ), 'Mother' => array( 'className' => 'User', 'foreignKey' => 'mother_id' ) ); Now setting recursive to 2 will fetch the results of the mother and father, along with their mother and father, and so on and so forth all the way down the line. The problem with this route is that the data is in nested arrays and I'm unsure of how to efficiently work through the code. If anyone would be able to steer me in the direction of the most efficient way to handle what I want to achieve that would be tremendously helpful. Any and all help is greatly appreciated and I'll gladly answer any questions anyone has. Thanks a lot.
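
    A third structure to consider (a sketch, MySQL-flavoured, with :placeholders standing in for bound values): keep Solution 2's mother_id/father_id columns for pulling parents and siblings, and add an ancestor closure table so the "same genetic line" check becomes a single join instead of a recursive walk or an ever-growing ID list. A brand-new family line is simply a person whose only row is the self row.

      -- sketch: closure table of ancestry
      CREATE TABLE person_ancestor (
        person_id   INT NOT NULL,
        ancestor_id INT NOT NULL,        -- every person also gets a self row
        PRIMARY KEY (person_id, ancestor_id)
      );

      -- on inserting a new child: its self row, then both parents' full ancestor sets
      INSERT INTO person_ancestor VALUES (:child_id, :child_id);
      INSERT INTO person_ancestor (person_id, ancestor_id)
        SELECT :child_id, ancestor_id FROM person_ancestor
        WHERE person_id IN (:father_id, :mother_id);

      -- breeding check: any shared line (including direct descent) produces a match
      SELECT COUNT(*) AS shared
      FROM person_ancestor a
      JOIN person_ancestor b ON a.ancestor_id = b.ancestor_id
      WHERE a.person_id = :person1 AND b.person_id = :person2;   -- 0 means allowed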

    Read the article

  • Better alternative to autonumber primary keys

    - by Comrad_Durandal
    I am looking for a better primary key than the autonumber data type, namely for the reason that it's limited to a long integer, when I really just need the field to reflect a number or text string that will never ever repeat, no matter HOW many records are added or deleted from the table. The problem is I am not sure how to implement something like turning the current date and time into a hexadecimal string and using that as a unique field I can use as a primary key. Am I just being too paranoid about running out of space?

    Read the article
