Search Results

Search found 5254 results on 211 pages for 'concept analysis'.

Page 170/211

  • Plotting a 3D graph in MATLAB?

    - by lytheone
    Hello, I am currently a beginner, and I am using MATLAB to do data analysis. I have a text file whose first row is formatted as follows: time;wave height 1;wave height 2;... There are columns up to wave height 19, and 4000 rows in total. The first column is time in seconds; from the 2nd column onwards, each value is a wave height elevation in metres. I would like MATLAB to plot a 3D graph with time on the x axis, the wave height number (1 to 19) on the y axis, and the corresponding elevation on the z axis; i.e., the value in column 2, row 10 (say 8 m) is the elevation for wave height 1 at the time given in column 1, row 10. I have tried the following:

        clear;
        filename = 'abc.daf';
        path = 'C:\D';
        a = dlmread([path '\' filename], ' ', 2, 1);
        [nrows, ncols] = size(a);
        t = a(1:nrows, 1);   % define t from text file
        for i = (1:20), j = (2:21); end
        wi = a(:, j);
        for k = (2:4000), l = k; end
        r = a(l, :);

    Every time I try to plot, the wi part works fine, but with r = a(l,:); the plot only gives me the data for the last time step, whereas I want all the data in the file to be plotted. Is there a way I can do that? I am sorry, as it is a bit confusing, but I will be very thankful if anyone can help me out. Thank you!
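
    (A sketch of a loop-free way to get the plot described above -- read everything once, then hand the whole matrix to surf -- assuming the 4000-row, time-plus-19-columns layout; the file name and delimiter are taken from the question:)

        % Sketch: plot the whole data set as a surface, with no per-row loop.
        a = dlmread('C:\D\abc.daf', ' ', 2, 1);   % same call as in the question
        t = a(:, 1);                  % time in seconds, 4000x1
        z = a(:, 2:20);               % elevations in metres, 4000x19
        [H, T] = meshgrid(1:19, t);   % wave height number vs. time grids
        surf(T, H, z);                % x = time, y = wave height no., z = elevation
        xlabel('time (s)'); ylabel('wave height number'); zlabel('elevation (m)');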

  • How to distinguish composition and self-typing use cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and well-known trivial composition. I'm curious which situations I should use each in. There are obvious differences in their applicability: self-types require you to use traits, while object composition allows you to change extensions at run-time with a var declaration. Leaving technical details behind, I can think of two indicators to help classify use cases. If some object is used as a combinator for a complex structure such as a tree, or simply has several similarly typed parts (a 1-car-to-4-wheels relation), then it should use composition. At the opposite extreme, let's assume one trait becomes too big to observe clearly and gets split up; it is quite natural to use self-types in this case. Those rules are not absolute. You may do extra work to convert code between the two techniques, e.g. you may replace the 4-wheels composition with self-typing over Product4, or use Cake[T <: MyType] {part : MyType} instead of Cake { this : MyType => } for cake-pattern dependencies. But both cases seem counterintuitive and give you extra work. There are plenty of boundary use cases, though; one-to-one relations are very hard to decide on. Is there any simple rule to decide which technique is preferable? Self-types make your classes abstract; composition makes your code verbose. Self-types give you problems with blending namespaces, and also give you extra typing for free (you get not just a cocktail of two elements but a gasoline-motor-oil cocktail known as a petrol bomb). How can I choose between them? What hints are there? Update: Let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and composition approaches respectively?
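
    (For concreteness, a minimal sketch of the two styles side by side; Engine and the Car names are hypothetical:)

        // Self-typing: a trait that can only be mixed in together with an
        // Engine, whose members it may then call directly.
        trait Engine { def start(): Unit }

        trait SelfTypedCar { this: Engine =>
          def drive(): Unit = start()
        }

        // Composition: the car holds an Engine as a part; declared as a var,
        // it can be swapped at run-time.
        class ComposedCar(var engine: Engine) {
          def drive(): Unit = engine.start()
        }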

  • Pros/cons of reading a connection string from a physical file vs the Application object (ASP.NET)?

    - by HaterTot
    My ASP.NET application reads an XML file to determine which environment it's currently in (e.g. local, development, production). It checks this file every single time it opens a connection to the database, in order to know which connection string to grab from the application settings. I'm entering a phase of development where efficiency is becoming a concern. I don't think it's a good idea to have to read a file on a physical disk every single time I wish to access the database (which is very often). I was considering storing the connection string in Application["ConnectionString"], so the code would be:

        public static string GetConnectionString()
        {
            if (Application["ConnectionString"] == null)
            {
                XmlDocument doc = new XmlDocument();
                doc.Load(HttpContext.Current.Request.PhysicalApplicationPath + "bin/ServerEnvironment.xml");
                // xnl is presumably a node list selected from the document;
                // the original snippet omits the line that produces it.
                XmlElement xe = (XmlElement) xnl[0];
                switch (xe.InnerText.ToString().ToLower())
                {
                    case "local":
                        connString = Settings.Default.ConnectionStringLocal;
                        break;
                    case "development":
                        connString = Settings.Default.ConnectionStringDevelopment;
                        break;
                    case "production":
                        connString = Settings.Default.ConnectionStringProduction;
                        break;
                    default:
                        throw new Exception("no connection string defined");
                }
                Application["ConnectionString"] = connString;
            }
            return Application["ConnectionString"].ToString();
        }

    I didn't design the application, so I figure there must have been a reason for reading the XML file every time (to change settings while the application runs?), but I have very little concept of the inner workings here. What are the pros and cons? Do you think I'd see a small performance gain by implementing the function above? Thanks.

  • How to identify ideas and concepts in a given text

    - by Nick
    I'm working on a project at the moment where it would be really useful to be able to detect when a certain topic/idea is mentioned in a body of text. For instance, if the text contained: Maybe if you tell me a little more about who Mr Balzac is, that would help. It would also be useful if I could have a description of his appearance, or even better a photograph? It'd be great to be able to detect that the person has asked for a photograph of Mr Balzac. I could take a really naïve approach and just look for the word "photo" or "photograph", but this would obviously be no good if they wrote something like: Please, never send me a photo of Mr Balzac. Does anyone know where to start with this? Is it even possible? I've looked into things like nltk, but I've yet to find an example of someone doing something similar and am still not entirely sure what this kind of analysis is called. Any help that can get me off the ground would be great. Thanks!
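
    (As one possible starting point, a naïve sketch of the keyword approach extended with a crude negation check, using NLTK's tokenizer; the keyword list, negation list, and window size are all arbitrary choices, and this assumes the punkt tokenizer data is installed:)

        # Naive sketch: report a photo request unless a negation word
        # appears shortly before the keyword.
        import nltk

        NEGATIONS = {"never", "not", "no"}
        KEYWORDS = {"photo", "photograph", "picture"}

        def asks_for_photo(text, window=5):
            tokens = [t.lower() for t in nltk.word_tokenize(text)]
            for i, tok in enumerate(tokens):
                if tok in KEYWORDS:
                    preceding = set(tokens[max(0, i - window):i])
                    if not NEGATIONS & preceding:
                        return True
            return False

        print(asks_for_photo("Could I have a photograph of Mr Balzac?"))      # True
        print(asks_for_photo("Please, never send me a photo of Mr Balzac."))  # False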

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and the queries can then return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is whether there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    so that these persist for the lifetime of the connection, and the SQL can use something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
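
    (Not an answer to the ADO side of the question, but worth noting as a sketch: SQL Server does expose one connection-scoped slot that persists across batches on the same connection, CONTEXT_INFO; the date packing below is an arbitrary, hypothetical encoding:)

        -- CONTEXT_INFO is a 128-byte, connection-scoped value.
        DECLARE @ctx varbinary(128);
        SET @ctx = CONVERT(varbinary(128), '20090101|20100101');  -- hypothetical packing
        SET CONTEXT_INFO @ctx;

        -- Later, in a different batch on the same connection:
        SELECT CONVERT(varchar(128), CONTEXT_INFO()) AS packed_dates;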

  • What is a good programming language/environment for Linux database applications?

    - by Dkellygb
    I could use some advice on my move from the Windows world to Linux. For my business, I have used VB6 and Microsoft Access, with both Access databases and SQL Server, in the past. The easy-to-use forms, report writers, and programming language were perfect for CRUD apps and analysis for our small hotel/restaurant business. After using Linux at home for some time, I would like to convert our small business. Our server is already a Linux box running Samba. I am happy with the OpenOffice.org applications instead of Microsoft Office. The only thing holding me back is a desktop database application in which I can develop the forms and reports we require. Base does not seem to be up to the job yet, in my experience. I would like something like VB.NET with Visual Studio (Express), but I would like to avoid Mono; I just don't see the point of it. (You can correct me if I'm wrong.) A good collection of forms, controls, and a good report writer would be ideal. I have looked at web-based stuff like Ruby on Rails, but I think a webserver for our 5-PC network is overkill. I don't mind running a proper database on our Ubuntu 9.10 server. I may have exposed a few prejudices above, but my mind is open. Any thoughts?

  • Handling missing/incomplete data in R -- is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs:

        mean(c(5, 6, 12, 87, 9, NA, 43, 67), na.rm = T)

    But if you want to deal with NAs before the function call, you need to do something like this:

    To remove each 'NA' from a vector:

        vx = vx[!is.na(vx)]

    To replace each 'NA' in a vector with a '0':

        vx = ifelse(is.na(vx), 0, vx)

    To remove every row that contains an 'NA' from a data frame:

        dfx = dfx[complete.cases(dfx), ]

    All of these permanently remove the 'NA's, or the rows containing an 'NA'. Sometimes this isn't quite what you want, though -- making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', even though that column itself has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' attribute, which lets you conceal -- but not remove -- NAs during a function call. Is there an analogous facility in R?
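
    (One workaround sketch: keep a logical mask alongside the data and index with it at call time, so the original vector is never modified; the names are illustrative:)

        # Keep the mask separate; index per call instead of overwriting.
        vx <- c(5, 6, 12, 87, 9, NA, 43, 67)
        mask <- !is.na(vx)      # TRUE where data is present

        mean(vx[mask])          # statistic over unmasked values only
        sd(vx[mask])
        vx                      # the original vector, NAs intact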

  • Combining aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), and barneystats(). I want to create a domain-specific query language (DSQL) on top of my DB, to facilitate using a domain language to query the DB. The 'language' consists of boolean expressions (or, more specifically, SQL-like criteria) which I then 'translate' back into pure (ANSI) SQL and send to the underlying DB. The following examples show what the language statements will look like, and hopefully they further clarify the concept:

    Example 1
        DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42
        Translation: fetch all rows in the underlying table where the computed value of aggregate function foobar() is between 1 and 3 AND the computed value of aggregate function fredstats() is greater than 42

    Example 2
        DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42
        Translation: fetch all rows where the specified criteria match

    Example 3
        DQL statement: foobar('green') / foobar('red') <> 42
        Translation: fetch all rows where the specified criteria match

    Example 4
        DQL statement: foobar('green') - foobar('red') >= 42
        Translation: fetch all rows where the specified criteria match

    Given the following information:

    - The table upon which the queries above are executed is called 'tbl'.
    - Table 'tbl' has the structure (id int, name varchar(32), weight float).
    - The result set returns only tbl.id, tbl.name, and the names of the aggregate functions as columns -- so, for example, the foobar() aggregate column will be called foobar in the result set. The first DQL query will therefore return a result set with the columns: id, name, foobar, fredstats.

    Given the above, my questions are:

    1. What would be the underlying SQL required for Example 1?
    2. What would be the underlying SQL required for Example 3?
    3. Given an algebraic expression composed of aggregate functions, is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)?

    I am using PostgreSQL as the DB, but I would prefer to use ANSI SQL wherever possible.
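
    (A sketch of what Example 1 might translate to, on the assumption that rows are grouped by (id, name) and that criteria over aggregates therefore belong in a HAVING clause:)

        SELECT t.id,
               t.name,
               foobar('yellow')    AS foobar,
               fredstats('weight') AS fredstats
        FROM   tbl t
        GROUP  BY t.id, t.name
        HAVING foobar('yellow') BETWEEN 1 AND 3
           AND fredstats('weight') > 42;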

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.) For example, let's consider the classic Address value object. If you needed to change "123 Main St" to "123 Main Street", why should I need to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing? EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to value objects?" Sorry for the confusion! EDIT: To clarify, I am not asking about CLR value types (vs reference types); I'm asking about the higher-level DDD concept of value objects. For example, here is a hack-ish way to implement immutable value objects for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?
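
    (For reference, a minimal sketch of the pattern under discussion -- an Address fixed at construction, with a copy method in place of setters; the member names are illustrative, and the read-only auto-properties assume a later C# than the question's vintage:)

        public sealed class Address
        {
            public string AddressLine1 { get; }
            public string City { get; }

            public Address(string addressLine1, string city)
            {
                AddressLine1 = addressLine1;
                City = city;
            }

            // "Changing" the address produces a new value instead.
            public Address WithAddressLine1(string line1) => new Address(line1, City);
        }

        // Usage:
        // myCustomer.Address = myCustomer.Address.WithAddressLine1("123 Main Street");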

  • How would you go about parsing markdown?

    - by John Leidegren
    You can find the syntax here. The thing is, the source that comes with the download is written in Perl, which I have no intention of honoring. It is riddled with regexes, and it relies on MD5 hashes to escape certain characters. Something is just wrong about that! I'm about to hard-code a parser for Markdown, and I wonder if someone has some experience with this? Edit: If you don't have anything meaningful to say about the actual parsing of Markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution, i.e. a third-party library.) To help a bit with the answers: regexes are meant to identify patterns, NOT to parse an entire grammar. That people consider doing so is foobar. If you think about Markdown, it's fundamentally based around the concept of paragraphs. As such, a reasonable approach might be to split the input into paragraphs. There are many kinds of paragraphs, e.g. heading, text, list, blockquote, code. The challenge is thus to identify these paragraphs and the context in which they occur. I'll be back with a solution, once I find it's worthy of being shared.
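
    (A sketch of the paragraph-splitting idea -- break the input on blank lines, then classify each block by its leading characters; the block kinds shown are deliberately incomplete:)

        // Sketch: split on blank lines, classify each block by prefix.
        using System;
        using System.Text.RegularExpressions;

        enum BlockKind { Heading, Blockquote, List, Code, Text }

        static class MarkdownBlocks
        {
            static BlockKind Classify(string block)
            {
                if (block.StartsWith("#"))    return BlockKind.Heading;
                if (block.StartsWith(">"))    return BlockKind.Blockquote;
                if (Regex.IsMatch(block, @"^(\*|-|\d+\.)\s")) return BlockKind.List;
                if (block.StartsWith("    ")) return BlockKind.Code;
                return BlockKind.Text;
            }

            static void Main()
            {
                string input = "# Title\n\nSome text.\n\n* item one\n* item two";
                foreach (var block in Regex.Split(input, @"\n\s*\n"))
                    Console.WriteLine(Classify(block) + ": " + block.Replace("\n", " / "));
            }
        }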

  • Architecting an ASP.NET MVC app to use repositories and services

    - by zaladane
    Hello, I recently started reading about ASP.NET MVC, and after getting excited about the concept I started to migrate my WebForms project to MVC, but I am having a hard time keeping my controllers skinny even after following all the good advice out there (or maybe I just don't get it...). The website I deal with has articles, videos, quotes... and each of these entities can have categories, comments, and images associated with it. I am using LINQ to SQL for database operations, and for each of these entities I have a repository, and for each repository I create a service to be used in the controller. So I have ArticleRepository, ArticleCategoryRepository, ArticleCommentRepository, and the corresponding services ArticleService, ArticleCategoryService... you see the picture. The problem I have is that I have one controller for article, category, and comment, because I thought that having ArticleController handle all of that might make sense; but now I have to pass all of the services needed to the controller constructor. So I would like to know what it is that I am doing wrong. Are my services not designed properly? Should I create a bigger service to encapsulate smaller services and use them in my controller? Or should I have an ArticleCategory controller and an ArticleComment controller? A page viewed by the user is made up of all of that: the article to be viewed, the comments associated with it, a listing of the categories to which it applies... How can I efficiently break down the controller to keep it "skinny" and solve my headache? Thank you! I hope my question is not too long to be read...
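
    (One shape this can take, sketched with hypothetical names: a single article-facing service composes the repositories, so the controller takes one dependency and stays skinny:)

        // Sketch: the service aggregates what the article pages need.
        public class ArticleService
        {
            private readonly IArticleRepository _articles;
            private readonly IArticleCategoryRepository _categories;
            private readonly IArticleCommentRepository _comments;

            public ArticleService(IArticleRepository articles,
                                  IArticleCategoryRepository categories,
                                  IArticleCommentRepository comments)
            {
                _articles = articles;
                _categories = categories;
                _comments = comments;
            }

            // Everything one article page needs, in a single call.
            public ArticlePageModel GetArticlePage(int articleId)
            {
                return new ArticlePageModel
                {
                    Article    = _articles.GetById(articleId),
                    Categories = _categories.GetForArticle(articleId),
                    Comments   = _comments.GetForArticle(articleId)
                };
            }
        }

        // The controller then depends on ArticleService alone:
        //   public ArticleController(ArticleService service) { ... }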

  • Using MEF with an exporting project that uses resources (XML) contained in the XAP

    - by JSmyth
    I'm doing a proof-of-concept app in SL4 using MEF, and as part of the app I am importing another XAP from an existing Silverlight project and displaying it in my host project. The problem is that the existing app uses some .xml files (as content), and it uses LINQ to XML to load these files, which are (assumed to be) bundled in the XAP. When I compose the application, the initialization fails because the host app doesn't contain the XML files. If I copy these XML files into the host project and run it, the composition works fine. However, I need to keep the XML files in the original project. Is there a way that I can download a XAP, look at its contents for XML files, and then load them into the host XAP at runtime, so that after the composition takes place the required XML resources can be found? Or should I work out some kind of contract with an import/export to pass the XML files to the host XAP? As the people developing the imported XAPs (should the project go ahead) are from a different company, I would like to keep changes to the way they develop their apps to a minimum.

  • Reconstructing trees from a "fingerprint"

    - by awshepard
    I've done my SO and Google research, and haven't found anyone who has tackled this before, or at least, anyone who has written about it. My question is, given a "universal" tree of arbitrary height, with each node able to have an arbitrary number of branches, is there a way to uniquely (and efficiently) "fingerprint" arbitrary sub-trees starting from the "universal" tree's root, such that given the universal tree and a tree's fingerprint, I can reconstruct the original tree? For instance, I have a "universal" tree (forgive my poor illustrations), representing my universe of possibilities:

                  Root
           / / /  |  \ \ ... \
          O O O   O   O O     O     (Level 1)
         /|\ /|\ ..............\    (Level 2)
        etc.

    I also have tree A, a rooted subtree of my universe:

            Root
          / / | \ \
         O O  O  O O
        /
        Etc.

    Is there a way to "fingerprint" the tree, so that given that fingerprint and the universal tree, I could reconstruct A? I'm thinking something along the lines of a hash, a compression, or perhaps a functional/declarative construction. Big-O analysis (in time or space) is a plus. As a for-instance, a nested expression like:

        {{(Root)},{(1),(2),(3)},{(2,3),(1),(4,5)}...}

    representing the actual nodes present at each level in the tree is probably valid, but can it be done more efficiently?

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (the Output and Time columns in the first table below), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregating reprocessing we would like to do would transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The reprocessed existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production, and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
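
    (One common pure-MySQL shape for this, sketched with assumed table and column names: join the event table to a helper table of integers to fan each event out into one row per minute:)

        -- "integers" is an assumed helper table holding n = 0, 1, 2, ...,
        -- large enough to cover the longest event in minutes.
        INSERT INTO production_by_minute
            (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL i.n MINUTE, '%Y-%m-%d %H:%i'),
               ROUND(e.output /
                     (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1))
        FROM   events e
        JOIN   integers i
          ON   i.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    (For the sample row, TIMESTAMPDIFF gives 15, so the event fans out into 16 minute-rows of 2120 / 16 ≈ 133, matching the table above.)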

  • Putting a Select Statement in a Hibernate Transaction

    - by Mark Estrada
    Hi all, I have been reading the net for a while regarding Hibernate, but there is one concept regarding transactions that I can't seem to understand. On some sites that I have visited, SELECT statements are run inside a transaction, like this:

        public List<Book> readAll() {
            Session session = HibernateUtil.getSessionFactory().getCurrentSession();
            session.beginTransaction();
            List<Book> booksList = session.createQuery("from Book").list();
            session.getTransaction().commit();
            return booksList;
        }

    while other sites do not advocate the use of a transaction for SELECT statements:

        public List<Book> readAll() {
            Session session = HibernateUtil.getSessionFactory().getCurrentSession();
            List<Book> booksList = session.createQuery("from Book").list();
            return booksList;
        }

    I am wondering which one I should follow. Any thoughts, please? Are transactions needed on SELECT statements or not? Thanks

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - As far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating "Ivory tower" architects and developers - The work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA Update I've heard another strategy goes along with overall SOA implementation: Get the business involved, and seek executive sponsorship. This would be part of the "Top-down" effort.

  • Partial Classes - are they bad design?

    - by dferraro
    Hello, I'm wondering why the 'partial class' concept even exists in .NET. I'm working on an application, and we are reading a (actually very good) book relevant to the development platform we are implementing at work. In the book, the author provides a large code base/wrapper around the platform API and explains how he developed it as he teaches different topics about platform development. Anyway, long story short: he uses partial classes all over the place as a way to fake multiple inheritance in C# (IMO). Why he didn't just split the classes up into multiple ones and use composition is beyond me. He will have three 'partial class' files to make up his base class, each with 3-500 lines of code... and he does this several times in his API. Do you find this justifiable? If it were me, I'd have followed the single responsibility principle and created multiple classes to handle different required behaviors, then created a base class that has instances of these classes as members (e.g. composition). Why did MS even put partial classes into the framework? They removed the ability to expand/collapse all code at each scope level in C# (this was allowed in C++) because it was obviously just enabling bad habits -- partial class is, IMO, the same thing. I guess my question is: can you explain to me when there would be a legitimate reason to ever use a partial class? I do not mean this to be a rant/war thread; I'm honestly looking to learn something here. Thanks
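
    (For context, the case partial classes were mainly aimed at -- one half of a class is machine-generated, the other hand-written, so regeneration never clobbers your code; a minimal sketch:)

        // CustomerForm.Designer.cs -- typically regenerated by a tool.
        public partial class CustomerForm
        {
            private void InitializeComponent()
            {
                // generated layout/control setup lives here
            }
        }

        // CustomerForm.cs -- the hand-written half of the same class.
        public partial class CustomerForm
        {
            public CustomerForm()
            {
                InitializeComponent();  // calls into the generated half
            }
        }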

  • Using Silverlight for Views in ASP.Net MVC - a bad idea?

    - by bplus
    I'm currently writing a small application for use internally at my office. I started out teaching myself some MVC (I've been a C# dev for 3 years). One of the main requirements is editable grids, and I quickly realised that Silverlight (I have zero Silverlight experience) could be a big help in this. I've managed to create a proof of concept getting MVC and Silverlight to talk back and forth by combining these two techniques: "Creating a Rest API using MVC" and "MVC SilverLight". I also got some help on Stack Overflow: silverlight-grids-mvc-http-post. Essentially all I'm doing is embedding a Silverlight object in a view, serializing the model data as JSON, and passing it to Silverlight (using init params written into the response). The Silverlight object can post data back to the controller as JSON. So far this seems like it could work quite well. However, I am a bit concerned that I could be painting myself into a corner with this approach: since I don't have much experience with either technology, I'm worried I'll get hit with something further down the line that I won't be able to work around. Has anybody else tried doing this? Any advice would be much appreciated!

  • Best ways to copyright-protect your work, for example Myows

    - by Saif Bechan
    Recently I read about Myows. They say it is "the universal copyright management and protection app for smart creatives", and it is used to protect your application from copyright infringement, and more. Do you think this would be a good idea for a large application, or are there better ways to achieve such a thing? URL: Myows

    Update

    Referencing: http://stackoverflow.com/questions/2618015/has-anyone-tried-myows-to-copyright-protect-your-work/2618628#2618628

    Wow, there is no better person to have answered this question than the creator himself. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app, there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copyable. I was glad to see that there is a company out there providing such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so 'new'. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code to see how it works out. I will leave this question up because I do want some more suggestions on this topic.

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows:

        merge :: Tree a -> Tree a -> Tree a  -- O(log n), where n is the size of the input trees
        singleton :: a -> Tree a             -- O(1)
        empty :: Tree a                      -- O(1)

        listToTree :: [a] -> Tree a          -- supposedly O(n)
        listToTree = listToTreeR . (map singleton)

        listToTreeR :: [Tree a] -> Tree a
        listToTreeR []     = empty
        listToTreeR (x:[]) = x
        listToTreeR xs     = listToTreeR (mergePairs xs)

        mergePairs :: [Tree a] -> [Tree a]
        mergePairs []       = []
        mergePairs (x:[])   = [x]
        mergePairs (x:y:xs) = merge x y : mergePairs xs

    This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs depends on the length of the list and the sizes of the trees: the length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h = log n is the first recursive step and h = 1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)). I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
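
    (As a hint at the shape of the bound: if the level-h pass performs n/2^h merges of trees of size 2^(h-1), each costing O(h), the total telescopes rather than multiplying out --

        \sum_{h=1}^{\lceil \log_2 n \rceil} \frac{n}{2^h} \cdot O(h)
            = O\!\Big(n \sum_{h \ge 1} \frac{h}{2^h}\Big)
            = O(2n)
            = O(n)

    using the standard identity \sum_{h \ge 1} h/2^h = 2. The per-level counts here are an idealized assumption for a power-of-two n.)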

  • Diamond problem in C++

    - by Jack
    I know the diamond problem. I am using the gcc compiler. I have some scenarios I need an explanation about.

    1) Why doesn't this work?

        class A {
        public:
            virtual void eat() { cout << "A eat\n"; }
        };
        class B : public A {
        public:
            void eat() { cout << "B eat\n"; }
        };
        class C : public A {
        public:
            void eat() { cout << "C eat\n"; }
        };
        class D : public B, C {
        public:
            void eat() { cout << "D eat\n"; }
        };

        int main() {
            A * a = new D();
            a->eat();
            getch();
            return 0;
        }

    2) When I do this instead, what happens in the background? How does the ambiguity get removed? Is the concept of vtables involved here?

        class A {
        public:
            void eat() { cout << "A eat\n"; }
        };
        class B : virtual public A {
        public:
            void eat() { cout << "B eat\n"; }
        };
        class C : virtual public A {
        public:
            void eat() { cout << "C eat\n"; }
        };
        class D : public B, C {
        public:
            void eat() { cout << "D eat\n"; }
        };

        int main() {
            A * a = new D();
            a->eat();
            getch();
            return 0;
        }

  • Prevent Rails link_to_remote multiple submits with JavaScript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices; they get prepended/appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()", the Ajax is never run. If I try it for :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page and it is valid to click more than one of them at a time -- just not the same one twice. One variable per row, declared outside of link_to_remote, seems very kludgey... Instead of using Prototype, I originally tried plain JavaScript first for this proof of concept, but it fails too:

        <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a>

    just puts up an alert when clicked -- the lambda here does nothing? This next one is more like the desired goal and should only alert the first time, but instead it alerts every time:

        <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a>

    All ideas appreciated!
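
    (One guess worth checking: in an inline onclick handler, self is just an alias for window, so both snippets rebind window.onclick rather than the link's own handler, while this refers to the anchor element itself. The same experiment with this -- which should alert once and then go quiet -- would look like:)

        <a href="#" onclick="alert('bar'); this.onclick = function(){ return false; };">click</a>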

  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I have the need to automate data-collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that, and online services). Anyway, as a follow-up to my previous question, I've written a little webcrawler that can visit websites.

    - Basic crawler class to easily and quickly interact with one website.
    - Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it).
    - The concept allows for multi-threading of crawlers; all class instances share the processed and queued lists of links.
    - Instead of keeping track of processed and queued links within the object, a JDBC connection could be established to store links in a database.
    - Currently limited to one website at a time; however, it could be expanded upon by adding an externalLinks stack and adding to it as appropriate.
    - JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for your desired information. It's lightweight.

    Is this a good/decent way to write the crawler, given the limitations above?

    http://pastebin.com/VtgC4qVE - Main.java
    http://pastebin.com/gF4sLHEW - JCrawler.java
    http://pastebin.com/VJ1grArt - HTMLUtils.java

    Thanks for your feedback in advance! :)

  • HATEOAS - Discovery and URI Templating

    - by Paul Kirby
    I'm designing a HATEOAS API for internal data at my company, but I have been having trouble with the discovery of links. Consider the following set of steps for someone to retrieve information about a specific employee in this system:

    1. The user sends GET to http://coredata/ to get all available resources, which returns a number of links, including one tagged as rel = "http://coredata/rels/employees".
    2. The user follows the HREF on that rel, performing a GET at (for example) http://coredata/employees.

    The data returned from this last call is my conundrum, and a situation where I've heard mixed suggestions. Here are some of them:

    1. That GET will return all employees (with perhaps truncated data), and the client would be responsible for picking the one it wants from that list.
    2. That GET would return a number of URI-templated links describing how to query / get one employee / get all employees. Something like:

        "_links": {
            "http://coredata/rels/employees#RetrieveOne": {
                "href": "http://coredata/employees/{id}"
            },
            "http://coredata/rels/employees#Query": {
                "href": "http://coredata/employees{?login,firstName,lastName}"
            },
            "http://coredata/rels/employees#All": {
                "href": "http://coredata/employees/all"
            }
        }

    I'm a little stuck here on which option remains closest to HATEOAS. For option 1, I really do not want to make my clients retrieve all employees every time just for the sake of navigation, but I can see how the URI templating in option 2 introduces some out-of-band knowledge. My other thought was to use the RetrieveOne, Query, and All operations as my cool URLs, but that seems to violate the concept that you should be able to navigate to the resources you want from one base URI. Has anyone else managed to come up with a good way to handle this? Navigation is dead simple once you've retrieved one resource or a set of resources, but it seems very difficult to use for discovery.
