Search Results

Search found 21481 results on 860 pages for 'oracle business intelligence'.


  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have legacy code to maintain, and while trying to understand the logic behind it I have run into lots of annoying issues. The application is written mainly in JavaScript, with extensive use of jQuery plus different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting the result of a remote AJAX request. It also uses callbacks a lot and a pretty complicated "by convention" programming style (lots of event handlers are created on the fly based on certain object names, e.g. the current page name or the current step name). Adding to that, the code is very messy and there is no obvious inner structure: the functions are scattered across the code, file names do not reflect the business role of the code, and lots of functions and code snippets are most likely not used at all.

    PROBLEM: How should I approach this code base so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools? Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. It would also be nice to create a "diagram of calls" in the application (i.e. in order to run a particular page's logic, this particular flow of function calls was executed in this particular order), not to mention being able to run a coverage analysis to identify potentially orphaned code fragments. I would like to stress once more that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...)

    An IDE of some sort that would aid in extending that code would also be great; I am currently looking into using Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say 70% JavaScript with jQuery, 30% ASP). I have obviously tried Firebug, but I was unable to find a way to define a breakpoint in, or step into, the code which is "injected" into the client-side JS using AJAX calls (i.e. the application retrieves the code by invoking a URL and injects it into the local client code). The Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.
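
    One hook that may help with the "injected code is invisible to the debugger" problem (a minimal sketch, assuming the wizard runs the fetched code through eval or jQuery's script handling; the injectStep wrapper below is hypothetical): most script debuggers honor an explicit debugger; statement even inside eval'd code, and a sourceURL comment gives the injected fragment a name in the debugger's script list.

        // Hypothetical wrapper around the app's existing injection point.
        function injectStep(url) {
            $.get(url, function (code) {
                // Name the fragment so it shows up in Firebug/DevTools.
                // Older Firebug uses "//@ sourceURL", newer tools "//# sourceURL".
                var named = code + "\n//# sourceURL=" + url;
                // The debugger statement pauses as soon as the step executes.
                window.eval("debugger;\n" + named);
            });
        }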


  • added div does not recognize jquery function

    - by zurna
    I pull the categories of one section from XML. The problem is that the pulled categories need to recognize the tab plugin, but they do not. I am not sure if I made sense; please let me know if I didn't, and I will try to explain it again. :)

        $.ajax({
            dataType: "xml",
            url: "/FLPM/content/home/index.cs.asp?Process=ViewVCategories",
            success: function(xml) {
                $(xml).find('row').each(function(){
                    var id = $(this).attr('id');
                    var CategoryName = $(this).find('CategoryName').text();
                    $("<div class='tab fleft'><a href='#'>" + CategoryName + "</a></div>").appendTo("#VCategories");
                });
            }
        });

        $("div.row-title").tabs("div.panes > div", {effect: 'ajax'}, function(i) {
            // get the pane to be opened
            var pane = this.getPanes().eq(i);
            // load it with a page specified in the tab's href attribute
            pane.html('<img src="http://www.refinethetaste.com/html/cp/images/loading.gif" alt="Loading..." />')
                .load(this.getTabs().eq(i).attr("href"));
        });

    The pulled categories are displayed here:

        <div class="row-title clear" id="VCategories"></div>

    The categories XML:

        <rows>
          <row id="1"><CategoryName>Nation</CategoryName></row>
          <row id="2"><CategoryName>Politics</CategoryName></row>
          <row id="3"><CategoryName>Health</CategoryName></row>
          <row id="4"><CategoryName>Business</CategoryName></row>
          <row id="5"><CategoryName>Culture</CategoryName></row>
        </rows>
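
    The usual cause of this symptom (a hedged note, not part of the original question): the tabs(...) call runs when the page loads, before the AJAX success callback has appended the category divs, so the plugin never sees them. Assuming the jQuery Tools tabs plugin from the snippet above, initializing the tabs inside the success callback, after the appendTo loop, wires up the injected elements:

        $.ajax({
            dataType: "xml",
            url: "/FLPM/content/home/index.cs.asp?Process=ViewVCategories",
            success: function (xml) {
                $(xml).find('row').each(function () {
                    var CategoryName = $(this).find('CategoryName').text();
                    $("<div class='tab fleft'><a href='#'>" + CategoryName + "</a></div>")
                        .appendTo("#VCategories");
                });
                // Only now do the tab divs exist in the DOM.
                $("div.row-title").tabs("div.panes > div", {effect: 'ajax'}, function (i) {
                    var pane = this.getPanes().eq(i);
                    pane.html('<img src="http://www.refinethetaste.com/html/cp/images/loading.gif" alt="Loading..." />')
                        .load(this.getTabs().eq(i).attr("href"));
                });
            }
        });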


  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example:

    - Dependency Injection (Spring, etc.)
    - MVC (Struts, ASP.NET MVC)
    - ORMs (LINQ to SQL, Hibernate)
    - Agile Software Development

    These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not?

    EDIT: It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective on what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news.

    In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is.

    Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems, in many SOers' opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!


  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I were at total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do it.

    These are the two tables I'll focus on. Example data:

        Trips table:
        1, 10:00, 11:00 ...
        1, 12:00, 15:00 ...
        1, 16:00, 19:00 ...
        2, 12:00, 13:00 ...
        3, 9:00, 18:00 ...

        Faults table:
        1, 13:00 ...
        1, 23:00 ...
        2, 12:30 ...

    In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened.

    Constraints: trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write:

        select ...
        from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
         and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime

    and then I'd get a dataset where every fault appears exactly once (one-to-many, as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table.

    What am I actually looking for? The tables are huge because they cover years of data, and every time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath:

        select ...
        from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
         and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime
        where Faults.FaultTime between :p0 and :p1

    Do you have any ideas how to achieve it?

    Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future.

    Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :)

    Thank you for reading this long question, and now I only hope for the best :)
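
    For the composite-key half of the problem, Fluent NHibernate does support composite primary keys via CompositeId(). A minimal sketch, with class and property names assumed from the example above (entities mapped this way also need Equals/GetHashCode overrides based on the key fields):

        public class TripMap : ClassMap<Trip>
        {
            public TripMap()
            {
                Table("Trips");
                // No surrogate key exists in the legacy schema, so map the
                // composite primary key directly.
                CompositeId()
                    .KeyProperty(x => x.VehicleId)
                    .KeyProperty(x => x.TripStartTime);
                Map(x => x.TripEndTime);
            }
        }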


  • paypal ipn working but stopping at 'thank you' page.

    - by Tarique Imam
    I am using the following code for the controller (CodeIgniter):

        function paypal_tran() {
            if (empty($_GET['action'])) {
                $_GET['action'] = 'process';
            }
            if ($this->uri->segment(3)) {
                $action = $this->uri->segment(3);
            } else {
                $action = 'process';
            }
            $ammount = 39.99;
            $this->lenders_model->paypal_process($action, $this_script, $ammount);
            $this->load->view('view_paypal_tran');
        }

        function ipn() {
            if ($this->paypal_class->validate_ipn()) {
                $data = array(
                    'fname' => 'fname', /* insert the user id */
                    'lname' => 'lname'
                );
                //$this->db->insert('ajax_test', $data);

                // For this example, we'll just email ourselves ALL the data.
                $subject = 'Instant Payment Notification - Received Payment';
                $to = '[email protected]'; // your email
                $body = "An instant payment notification was successfully received\n";
                $body .= "from " . $this->paypal_class->ipn_data['payer_email'] . " on " . date('m/d/Y');
                $body .= " at " . date('g:i A') . "\n\nDetails:\n";
                foreach ($this->paypal_class->ipn_data as $key => $value) {
                    $body .= "\n$key: $value";
                }
                mail($to, $subject, $body);
            }
        }

        function success() {
            $this->load->view('paypal_succ_view');
        }

    And this is my model:

        function paypal_process($action, $this_script, $ammount) {
            switch ($action) {
                case 'process': // Process an order...
                    // There should be no output at this point. To process the POST data,
                    // the submit_paypal_post() function will output all the HTML tags which
                    // contain a FORM which is submitted instantaneously using the BODY onload
                    // attribute. In other words, don't echo or printf anything when you're
                    // going to be calling the submit_paypal_post() function.

                    // This is where you would have your form validation and all that jazz.
                    // You would take your POST vars and load them into the class like below,
                    // only using the POST values instead of constant string expressions.
                    // For example, after ensuring all the POST variables from your custom
                    // order form are valid, you might have:
                    //
                    // $p->add_field('first_name', $_POST['first_name']);
                    // $p->add_field('last_name', $_POST['last_name']);

                    $this->paypal_class->add_field('business', '[email protected]');
                    $this->paypal_class->add_field('return', $this_script . '/success');
                    $this->paypal_class->add_field('cancel_return', $this_script . '/cancel');
                    $this->paypal_class->add_field('notify_url', $this_script . '/ipn');
                    $this->paypal_class->add_field('item_name', 'Lenders Account for one month');
                    $this->paypal_class->add_field('amount', $ammount);

                    $this->paypal_class->submit_paypal_post(); // submit the fields to paypal
                    $this->paypal_class->dump_fields();        // for debugging, output a table of all the fields
                    break;
            }
        }

    The problem is that the IPN is not working. The hidden field has the value for the redirect to ipn, but it is not working!! Please help.
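
    One diagnostic worth trying first (a hedged suggestion, not from the original post): PayPal calls notify_url from its own servers, so the IPN handler only fires if that URL is publicly reachable; a localhost or intranet URL fails silently, and no browser redirect is involved at all. Logging every hit to ipn() shows whether PayPal reaches the endpoint (the log path is just an example):

        function ipn()
        {
            // Append the raw IPN payload to a log file so we can see
            // whether PayPal ever reaches this endpoint.
            file_put_contents(
                '/tmp/ipn.log',
                date('c') . ' ' . file_get_contents('php://input') . "\n",
                FILE_APPEND
            );

            if ($this->paypal_class->validate_ipn()) {
                // ... existing handling ...
            }
        }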


  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.)

    First, I think it is important to enumerate the reasons systems/databases use SSNs. (Note: these are reasons for the de facto current state; I understand that many of them are not good reasons.)

    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. This would typically be government, tax and financial.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible; all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.)

    This approach would solve issues 2 and 3 without ever requiring lookups to get the real SSN. For issue 1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue 4 could be handled the same way as issue 1, though obviously the best thing would be to move away from having users supply an SSN for log-on.

    There are a couple of papers on this:

    - UC Berkeley: http://bit.ly/bdZPjQ
    - Oracle Vault: bit.ly/cikbi1
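
    At its core, the repository described above is a single lookup table keyed by the substitute number. A minimal sketch of that table (all names are hypothetical; the true SSN is stored encrypted at rest, and the last four digits are kept separately for the common partial-lookup case):

        CREATE TABLE ssn_vault (
            asn       CHAR(9)       NOT NULL PRIMARY KEY, -- random substitute, carries no structure
            ssn_enc   VARBINARY(64) NOT NULL,             -- real SSN, encrypted at rest
            ssn_last4 CHAR(4)       NOT NULL              -- for callers that only need the last 4
        );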


  • What are the benefits of using ORM over XML Serialization/Deserialization?

    - by Tequila Jinx
    I've been reading about NHibernate and Microsoft's Entity Framework to perform object-relational mapping against my data access layer. I'm interested in the benefits of having an established framework to perform ORM, but I'm curious about the performance costs of using it compared to standard XML serialization and deserialization.

    Right now, I develop stored procedures in Oracle and SQL Server that use XML types for either input or output parameters, and return or shred XML depending on need. I use a custom database command object that uses generics to deserialize the XML results into a specified serializable class. By using a combination of generics, XML (de)serialization and Microsoft's DAAB, I've got a process that's fairly simple to develop against regardless of the data source. Moreover, since I exclusively use stored procedures to perform database operations, I'm mostly protected from changes in the data structure.

    Here's an over-simplified example of what I've been doing:

        static void Main()
        {
            testXmlClass test = new testXmlClass(1);
            test.Name = "Foo";
            test.Save();
        }

        // Example serializable class ---------------------------------------------
        [XmlRootAttribute("test")]
        class testXmlClass
        {
            [XmlElement(Name = "id")]
            public int ID { set; get; }

            [XmlElement(Name = "name")]
            public string Name { set; get; }

            // Create an instance of the class loaded with data.
            public testXmlClass(int id)
            {
                GenericDBProvider db = new GenericDBProvider();
                testXmlClass loaded = db.ExecuteSerializable<testXmlClass>("myGetByIDProcedure");
                this.ID = loaded.ID;
                this.Name = loaded.Name;
            }

            // Save the class to the database...
            public void Save()
            {
                GenericDBProvider db = new GenericDBProvider();
                db.AddInParameter("myInputParameter", DbType.Xml, this);
                db.ExecuteSerializableNonQuery("mySaveProcedure");
            }
        }

        // Database handler ---------------------------------------------------------
        class GenericDBProvider
        {
            public T ExecuteSerializable<T>(string commandText) where T : class
            {
                XmlSerializer xml = new XmlSerializer(typeof(T));
                // Connection and command code is assumed for the purposes of this example.
                // The final results basically just come down to...
                return xml.Deserialize(commandResults) as T;
            }

            public void ExecuteSerializableNonQuery(string commandText)
            {
                // Once again, connection and command code is assumed...
                // Basically, just execute the command along with the specified
                // parameters which have been serialized.
            }

            public void AddInParameter(string name, DbType type, object value)
            {
                StringWriter w = new StringWriter();
                XmlSerializer x = new XmlSerializer(value.GetType());

                // Handle serialization for serializable classes.
                if (type == DbType.Xml && (value.GetType() != typeof(System.String)))
                {
                    x.Serialize(w, value);
                    w.Close();
                    // Store the serialized object in a DbParameterCollection accessible
                    // to my other methods.
                }
                else
                {
                    // Handle all other parameter types.
                }
            }
        }

    I'm starting a new project which will rely heavily on database operations. I'm very curious to know whether my current practices will be sustainable in a high-traffic situation, and whether or not I should consider switching to NHibernate or Microsoft's Entity Framework to perform what essentially seems to boil down to the same thing I'm currently doing. I appreciate any advice you may have.


  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate that I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try to run the following test, I get an exception. I've tried a couple of different ways to perform a similar query, but I get an almost identical exception for all of them (it differs only in which LINQ method is being executed).

    The first test:

        [Test]
        public void QueryLatestBatch()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.Query<Batch>()
                    .FirstOrDefault();

                Assert.That(batch, Is.Not.Null);
            }
        }

    The exception:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery)
        at NHibernate.Linq.NhQueryProvider.Execute(Expression expression)
        at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source)

    The second test:

        [Test]
        public void QueryLatestBatch2()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.Query<Batch>()
                    .OrderBy(x => x.Executed)
                    .Take(1)
                    .SingleOrDefault();

                Assert.That(batch, Is.Not.Null);
            }
        }

    The exception:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery)
        at NHibernate.Linq.NhQueryProvider.Execute(Expression expression)
        at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source)

    However, this one is passing (using QueryOver<>):

        [Test]
        public void QueryOverLatestBatch()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.QueryOver<Batch>()
                    .OrderBy(x => x.Executed).Asc
                    .Take(1)
                    .SingleOrDefault();

                Assert.That(batch, Is.Not.Null);
                Assert.That(batch.Executed, Is.LessThan(DateTime.Now));
            }
        }

    Using the QueryOver<> API is not bad at all, but I'm just kind of baffled that the Query<> API isn't working, which is kind of sad, since the First() operation is very concise and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test.

    EDIT: I'm using Oracle 11g; my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver<> API, but not through LINQ.


  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    - Full-text search: searches all address book contents, messages, anything else being a plus.
    - Better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.).
    - Bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone).
    - No multitasking capabilities worth mentioning.
    - And in general, no e.g. weekly software updates, which would make the phone much more usable (even if they had to be done over USB rather than over the network).

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is. And it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

    Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme, if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation: messages, calling, calendar, everything is out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected.

    BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast-forward functionality only aggravates the problem.


  • How to work with files and streams in PHP; case: if we open a file in Class A and pass the open stream to Class B

    - by Rachel
    I have two classes: one is a Business Logic class (BLO) and the other is a Data Access class (DAO), and my BLO class depends on my DAO class. Basically I am opening a CSV file to write into in my BLO class constructor, as I create an object of the BLO and pass in a file name from the command prompt:

        $this->fin = fopen($file, 'w+') or die('Cannot open file');

    Now, inside the BLO I have one function, notifiy(), which depends on the DAO class: it calls getCurrentDBSnapshot() on the DAO and passes in the open stream so that data gets populated into it.

    The BLO class constructor:

        public function __construct($file)
        {
            // Open Unica file for parsing.
            $this->fin = fopen($file, 'w+') or die('Cannot open file');
            // Initialize the repository DAO.
            $this->dao = new Dao('REPO');
        }

    The BLO class method that interacts with the DAO and calls getCurrentDBSnapshot():

        public function notifiy()
        {
            $data = $this->fin;
            var_dump($data); // resource(9) of type (stream)

            // Basically I am passing in $data, which is resource(9) of type (stream).
            $outputFile = $this->dao->getCurrentDBSnapshot($data);
        }

    The DAO function getCurrentDBSnapshot(), which gets the current state of the database table:

        public function getCurrentDBSnapshot($data)
        {
            $query = "SELECT * FROM myTable";

            // Basically just preparing the query.
            $stmt = $this->connection->prepare($query);
            // Execute the statement.
            $stmt->execute();

            $header = array();
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                if (empty($header)) {
                    // Get the header values from the table (column names).
                    $header = array_keys($row);
                    // Populate the CSV file with header information from myTable.
                    fputcsv($data, $header);
                }
                // Export every row to the file.
                fputcsv($data, $row);
            }

            var_dump($data); // resource(9) of type (stream)
            return $data;
        }

    So basically I am getting back resource(9) of type (stream) from getCurrentDBSnapshot(), and I am storing that stream in $outputFile in the BLO class method notifiy(). Now I want to close the file which I opened for writing, and so it should be fclose() of either $outputFile or $data, but in both cases it gives me:

        var_dump(fclose($outputFile)) as bool(false)
        var_dump(fclose($data)) as bool(false)

    and

        var_dump($outputFile) as resource(9) of type (Unknown)
        var_dump($data) as resource(9) of type (Unknown)

    My question: if I open a file using fopen() in class A, call a class B method from a class A method and pass in the open stream, and class B does some work and returns the open stream, how can I close that open stream in class A's method? Or is it OK to keep that stream open and not use fclose()? I would appreciate inputs, as I am not very sure how this should be implemented.
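
    For what it's worth, passing a stream resource between objects never copies or reopens it; it stays one resource, and a single fclose() on any variable that holds it closes it for everyone (after which every holder dumps as "type (Unknown)", which is PHP's way of saying the resource is already closed). A minimal sketch of the intended pattern (class and path names hypothetical):

        <?php
        class Writer
        {
            // $fh is the very same resource the caller opened.
            public function fill($fh)
            {
                fputcsv($fh, array('a', 'b'));
                return $fh; // returning it does not duplicate the resource
            }
        }

        $fh = fopen('/tmp/out.csv', 'w+');
        $w = new Writer();
        $out = $w->fill($fh);

        var_dump($out === $fh);  // bool(true): one resource, so close it exactly once
        var_dump(fclose($out));  // bool(true), unless something closed it earlier
        var_dump($fh);           // resource of type (Unknown) now that it is closed

    Seeing bool(false) plus "type (Unknown)" in the question above therefore suggests something closed the stream before those fclose() calls ran.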


  • Inference engine to calculate matching set according to internal rules

    - by Zecrates
    I have a set of objects with attributes, and a bunch of rules that, when applied to the set of objects, provide a subset of those objects. To make this easier to understand I'll provide a concrete example. My objects are persons, and each has three attributes: country of origin, gender and age group (all attributes are discrete). I have a bunch of rules, like "all males from the US", which correspond with subsets of this larger set of objects.

    I'm looking for either an existing Java "inference engine" or something similar which will be able to map from the rules to a subset of persons, or advice on how to go about creating my own. I have read up on rule engines, but that term seems to be used exclusively for expert systems that externalize the business rules, and usually doesn't include any advanced form of inferencing. Here are some examples of the more complex scenarios I have to deal with:

    1. I need the conjunction of rules. So when presented with both "include all males" and "exclude all US persons in the 10 - 20 age group", I'm only interested in the males outside of the US, and the males within the US that are outside the 10 - 20 age group.
    2. Rules may have different priorities (explicitly defined). So a rule saying "exclude all males" will override a rule saying "include all US males."
    3. Rules may be conflicting. So I could have both an "include all males" and an "exclude all males", in which case the priorities will have to settle the issue.
    4. Rules are symmetric. So "include all males" is equivalent to "exclude all females."
    5. Rules (or rather subsets) may have meta rules (explicitly defined) associated with them. These meta rules will have to be applied in any case that the original rule is applied, or if the subset is reached via inferencing. So if a meta rule of "exclude the US" is attached to the rule "include all males", and I provide the engine with the rule "exclude all females", it should be able to infer that the "exclude all females" subset is equivalent to the "include all males" subset, and as such apply the "exclude the US" rule additionally.

    I can in all likelihood live without item 5, but I do need all the other properties mentioned. Both my rules and objects are stored in a database and may be updated at any stage, so I'd need to instantiate the 'inference engine' when needed and destroy it afterward.
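
    Items 1 through 4 can be covered without a full inference engine. A hedged sketch of the core evaluation loop in Java (the Person class and predicates are hypothetical; item 4 would be handled by normalizing "exclude all females" into "include all males" form before this loop, and item 5 is out of scope here). Rules run from lowest to highest priority so that higher-priority rules overwrite earlier verdicts, which settles conflicts; on equal priority, exclusions run after inclusions, which yields the conjunction semantics of item 1:

        import java.util.*;
        import java.util.function.Predicate;

        class Person {
            final String country; final String gender; final String ageGroup;
            Person(String c, String g, String a) { country = c; gender = g; ageGroup = a; }
        }

        class Rule {
            final int priority;
            final boolean include;              // include vs. exclude
            final Predicate<Person> matches;    // e.g. p -> p.gender.equals("male")

            Rule(int priority, boolean include, Predicate<Person> matches) {
                this.priority = priority;
                this.include = include;
                this.matches = matches;
            }
        }

        class RuleEvaluator {
            static Set<Person> evaluate(Collection<Person> all, List<Rule> rules) {
                // Lowest priority first, so later (higher-priority) rules win;
                // on ties, includes go before excludes so exclusions prevail.
                List<Rule> ordered = new ArrayList<Rule>(rules);
                ordered.sort(Comparator.comparingInt((Rule r) -> r.priority)
                        .thenComparing(r -> r.include, Comparator.reverseOrder()));

                Set<Person> result = new LinkedHashSet<Person>();
                for (Rule rule : ordered) {
                    for (Person p : all) {
                        if (rule.matches.test(p)) {
                            if (rule.include) result.add(p);
                            else result.remove(p);
                        }
                    }
                }
                return result;
            }
        }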


  • Combinations and Permutations in F#

    - by Noldorin
    I've recently written the following combinations and permutations functions for an F# project, but I'm quite aware they're far from optimised.

        /// Rotates a list by one place forward.
        let rotate lst =
            List.tail lst @ [List.head lst]

        /// Gets all rotations of a list.
        let getRotations lst =
            let rec getAll lst i =
                if i = 0 then [] else lst :: (getAll (rotate lst) (i - 1))
            getAll lst (List.length lst)

        /// Gets all permutations (without repetition) of specified length from a list.
        let rec getPerms n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, _ ->
                lst
                |> getRotations
                |> Seq.collect (fun r -> Seq.map ((@) [List.head r]) (getPerms (k - 1) (List.tail r)))

        /// Gets all permutations (with repetition) of specified length from a list.
        let rec getPermsWithRep n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, _ ->
                lst |> Seq.collect (fun x -> Seq.map ((@) [x]) (getPermsWithRep (k - 1) lst))
            // equivalent:
            // | k, _ ->
            //     lst
            //     |> getRotations
            //     |> Seq.collect (fun r -> List.map ((@) [List.head r]) (getPermsWithRep (k - 1) r))

        /// Gets all combinations (without repetition) of specified length from a list.
        let rec getCombs n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, (x :: xs) ->
                Seq.append (Seq.map ((@) [x]) (getCombs (k - 1) xs)) (getCombs k xs)

        /// Gets all combinations (with repetition) of specified length from a list.
        let rec getCombsWithRep n lst =
            match n, lst with
            | 0, _ -> seq [[]]
            | _, [] -> seq []
            | k, (x :: xs) ->
                Seq.append (Seq.map ((@) [x]) (getCombsWithRep (k - 1) lst)) (getCombsWithRep k xs)

    Does anyone have any suggestions for how these functions (algorithms) can be sped up? I'm particularly interested in how the permutation ones (with and without repetition) can be improved. The business involving rotations of lists doesn't look too efficient to me in retrospect.

    Update: Here's my new implementation for the getPerms function, inspired by Tomas's answer. Unfortunately, it's not really any faster than the existing one. Suggestions?

        let getPerms n lst =
            let rec getPermsImpl acc n lst =
                seq {
                    match n, lst with
                    | k, x :: xs ->
                        if k > 0 then
                            for r in getRotations lst do
                                yield! getPermsImpl (List.head r :: acc) (k - 1) (List.tail r)
                        if k >= 0 then
                            yield! getPermsImpl acc k []
                    | 0, [] -> yield acc
                    | _, [] -> ()
                }
            getPermsImpl List.empty n lst


  • Choosing the right .NET architecture. WCF? WPF/Forms, ASP.NET (MVC)?

    - by Tommy Jakobsen
    I’m in the situation that I have to design and implement a rather big system from the bottom. I’m having some (a lot actually) questions about the architecture that I would like your comments and thoughts on. I don’t hope that I’ve written too much here, but I wanted to give you all an idea of what the system is. Quick info about the applications, read if you want: I can’t share much detail about the project, but basically it’s a system where we offer our customer a service to manage their users. We have a hotline where the users call and our hotline uses an (windows) application (intranet) to manage the user’s data, etc. The customer also has a web application where they can see reports, information about their business and users, and the ability to modify their data. Modifying data is not just user data like address and so, but also information about the products/services the user has, which can be complicated. The applications will be built on Microsoft .NET Framework 4, with a MS SQL Server 2008 database. There will be a few applications that have to access this database, such as: Intranet application (used by us and our hotline) Customer web application type 1 Customer web application type 2 Customer web application type n different applications) … Now my big problem is what .NET parts I should use for such a system. For the “backend” I’ve considered using Windows Communication Foundation: Would WCF be a good choice? The intranet application will be an application that has to edit lots of records in the database. It has to be easy to navigate using the keyboard (fast to work with). Has a feature such as “find customer, find that, lookup this, choose this and update that”. What would be the best choice to develop this application in? Would it be WPF or good old Windows Forms? I don’t need all of the fancy graphics features in WPF, like 3D, but the application has to look nice (maybe something like the new Visual Studio/Office tools). And the same question goes for the web pages. They have much the same work to do, but not as many features as the intranet application, and not the same amount of data (much less). That is my questions for now. I’m hoping to get a discussion going that will open my eyes to some of these technologies, helping me decide architecture to go with. I would like to say thanks in advance, and let you all know that any thoughts will be much appreciated.


  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id:

    - There are multiple users (distinct usr_ids).
    - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
    - trans_id is unique only for very small time ranges: over time it repeats.
    - remaining_lives (for a given user) can both increase and decrease over time.

    Example:

        time_stamp | lives_remaining | usr_id | trans_id
        ------------------------------------------------
        07:00      | 1               | 1      | 1
        09:00      | 4               | 2      | 2
        10:00      | 2               | 3      | 3
        10:00      | 1               | 2      | 4
        11:00      | 4               | 1      | 5
        11:00      | 3               | 1      | 6
        13:00      | 3               | 3      | 1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp | lives_remaining | usr_id | trans_id
        ------------------------------------------------
        11:00      | 3               | 1      | 6
        10:00      | 1               | 2      | 4
        13:00      | 3               | 3      | 1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
              FROM lives
              GROUP BY usr_id
              ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
              FROM lives
              GROUP BY usr_id
              ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMs and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize.

    I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        create table lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer);
        insert into lives values ('2000-01-01 07:00', 1, 1, 1);
        insert into lives values ('2000-01-01 09:00', 4, 2, 2);
        insert into lives values ('2000-01-01 10:00', 2, 3, 3);
        insert into lives values ('2000-01-01 10:00', 1, 2, 4);
        insert into lives values ('2000-01-01 11:00', 4, 1, 5);
        insert into lives values ('2000-01-01 11:00', 3, 1, 6);
        insert into lives values ('2000-01-01 13:00', 3, 3, 1);
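
    For reference, a sketch of the Postgres-specific idiom that fits this problem (not from the original post): DISTINCT ON keeps exactly the first row per usr_id under the given sort order, so sorting each user's rows by time_stamp and then trans_id, both descending, returns the latest row per user with no subquery or self join:

        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM lives
        ORDER BY usr_id, time_stamp DESC, trans_id DESC;

    A composite index on (usr_id, time_stamp, trans_id) should help both this and the window-function alternative, ROW_NUMBER() OVER (PARTITION BY usr_id ORDER BY time_stamp DESC, trans_id DESC), which is available from PostgreSQL 8.4 onwards.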


  • A SelfHosted WCF Service over Basic HTTP Binding doesn't support more than 1000 concurrent requests

    - by Krishnan
    I have a self-hosted WCF service over BasicHttpBinding, consumed by an ASMX client. I'm simulating a concurrent user load of 1200 users. The service method takes a string parameter and returns a string. The data exchanged is less than 10KB. The processing time for a request is fixed at 2 seconds by a Thread.Sleep(2000) statement; nothing additional. I have removed all the DB hits / business logic.

    The same piece of code runs fine for 1000 concurrent users. I get the following error when I bump the number up to 1200 users:

        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
        at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        --- End of inner exception stack trace ---
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
        --- End of inner exception stack trace ---
        at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at WCF.Throttling.Client.Service.Function2(String param)

    This exception is often reported on DataContract mismatch and large data exchange, but never when doing a load test. I have browsed enough and have tried most of the options, which include:

    - Enabled trace and message logging on the server side, but no errors are logged.
    - To overcome port exhaustion, MaxUserPort is set to 65535 and TcpTimedWaitDelay to 30 secs.
    - MaxConcurrentCalls is set to 600 and MaxConcurrentInstances is set to 1200.
    - The Open, Close, Send and Receive timeouts are set to 10 minutes.
    - The HttpWebRequest KeepAlive is set to false.

    I have not been able to nail down the issue for the past two days. Any help would be appreciated. Thank you.
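
    Two more knobs worth ruling out in a setup like this (a hedged sketch; the numbers are illustrative, not tuned). The service-side throttle also has maxConcurrentSessions alongside the two values already tuned, and on the ASMX client side System.Net caps outgoing connections per host, which matters when one test client simulates all 1200 users:

        <!-- service host app.config, inside <serviceBehaviors>/<behavior> -->
        <serviceThrottling maxConcurrentCalls="1200"
                           maxConcurrentSessions="1200"
                           maxConcurrentInstances="1200" />

        <!-- ASMX client app.config -->
        <system.net>
          <connectionManagement>
            <add address="*" maxconnection="1200" />
          </connectionManagement>
        </system.net>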


  • How to limit results in a SharePoint XSL query

    - by David
    Hello all, I am creating a SharePoint site that we will use to report issues with trucks used in our business. Linked to the list I have created will be a page that displays an overview of the trucks, where a little truck icon shows each truck's current status: green and the truck is okay (no open issues); red and the truck has an open issue with status "Undrivable"; orange and there are two open issues, which requires the user to look further into the truck before using it; and finally a gray truck for when there is a newly created issue that has not been looked into (not sure if it is drivable or not).

    I have managed to create the "dashboard" and, with my limited XSL/XPath knowledge, been able to add a truck and replicate the description above, but... in my test I have created 4 issues. If, for example, three of them are changed to status Closed and one is left Undrivable, I will get four icons on the page: three green trucks and the last one red. So in theory it works, but I obviously only want to see the last truck; one truck. I am not interested in seeing the others.

        <xsl:template name="dvt_1.rowview">
          <xsl:variable name="CountReport" select="count(/dsQueryResponse/Rows/Row[@Highloader='GGEU12' and @Status!='Closed'])" />
          <xsl:variable name="MoreThan" select="$CountReport &gt; 1" />
          <xsl:variable name="NoReports" select="$CountReport = 0" />
          <xsl:variable name="Closed" select=" @Highloader='GGEU12' and @Status='Closed'" />
          <xsl:choose>
            <xsl:when test="$MoreThan">
              <div class="ms-vb"><img title='More than one report exist!' border='0' alt='In Progress' src='highloader/Library/hl-orange.png' /></div>
            </xsl:when>
            <xsl:otherwise>
              <div class="ms-vb"><xsl:value-of disable-output-escaping="yes" select="@Icon" /></div>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    My hope is that someone with slightly more knowledge can find the last piece of the puzzle for me! Thanks for reading, and feel free to ask questions to fill any gap I left above.
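
    Since dvt_1.rowview runs once per row, each of the four issues renders its own icon. One way to collapse that to a single icon (a hedged sketch against the row structure implied above; @ID is assumed to be the list item ID, so the highest value marks the newest report) is to sort the matching rows and emit output only for the first one:

        <xsl:for-each select="/dsQueryResponse/Rows/Row[@Highloader='GGEU12']">
          <xsl:sort select="@ID" data-type="number" order="descending" />
          <xsl:if test="position() = 1">
            <!-- Render the one truck icon from this, the newest row. -->
            <div class="ms-vb">
              <xsl:value-of disable-output-escaping="yes" select="@Icon" />
            </div>
          </xsl:if>
        </xsl:for-each>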


  • Grouping geographical shapes

    - by grenade
    I am using Dundas Maps and attempting to draw a map of the world where countries are grouped into regions that are specific to a business implementation. I have shape data (points and segments) for each country in the world. I can combine countries into regions by adding all points and segments for the countries within a region to a new region shape:

        foreach (var region in GetAllRegions()) {
            var regionShape = new Shape { Name = region.Name };
            foreach (var country in GetCountriesInRegion(region.Id)) {
                var countryShape = GetCountryShape(country.Id);
                regionShape.AddSegments(countryShape.ShapeData.Points, countryShape.ShapeData.Segments);
            }
            map.Shapes.Add(regionShape);
        }

    The problem is that the country border lines still show up within a region, and I want to remove them so that only regional borders show up. Dundas polygons must start and end at the same point; this is the case for all the country shapes. Now I need an algorithm that can:

    1. Determine where country borders intersect at a regional border, so that I can join the regional border segments.
    2. Determine which country borders are not regional borders, so that I can discard them.
    3. Sort the resulting regional points so that they sequentially describe the shape boundaries.

    Below is where I have gotten to so far with the map. You can see that the country borders still need to be removed. For example, the border between Mongolia and China should be discarded, whereas the border between Mongolia and Russia should be retained. The reason I need to retain a regional border is that the region colors will be significant in conveying information, but adjacent regions may be the same color. The regions can change to include or exclude countries, and this is why the regional shaping must be dynamic.

    EDIT: I now know that what I am looking for is a UNION of polygons. David Lean explains how to do it using the spatial functions in SQL Server 2008, which might be an option, but my efforts have come to a halt because the resulting polygon union is so complex that SQL truncates it at 43,680 characters. I'm now trying to either find a workaround for that or find a way of doing the union in code.
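
    On the truncation point, a hedged guess plus a sketch (table and column names hypothetical): a 43,680-character cutoff suggests the union is being rendered as WKT text on the way out. Keeping the union entirely server-side in a geometry variable and returning it as binary avoids that limit:

        DECLARE @regionId int = 1;
        DECLARE @region geometry;

        -- Accumulate the union across the region's countries.
        -- (Variable accumulation over rows is a common idiom but not a documented
        -- guarantee; a cursor over the rows calling STUnion is the safe spelling.)
        SELECT @region = CASE
                             WHEN @region IS NULL THEN Shape
                             ELSE @region.STUnion(Shape)
                         END
        FROM CountryShapes
        WHERE RegionId = @regionId;

        -- Return WKB rather than a (truncatable) WKT string, optionally simplified.
        SELECT @region.Reduce(0.01).STAsBinary() AS RegionShape;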


  • How to catch 'exceptions' for out of order execution in Workflow Foundation 4?

    - by Alex Key
    Hi, I am attempting to model a workflow using a "WCF Workflow Service" in .NET / VS 2010 that needs to handle out-of-order execution gracefully (but not allow it, if that makes sense!?).

    For example, I have two receive activities inside a flowchart, one called Initialize and the other called GetValue. In most cases Initialize should be called first and GetValue after (as modeled in the flowchart). However, if GetValue is executed before Initialize, I do not want to return an "out of order" exception (although when I look at the WCF Test Client, I can't actually see an exception), but instead a custom exception saying something like "you must initialize first".

    In theory I could model this with lots of parallel activities and conditions to check if Initialized / Running / Terminated, etc. But the business process I am modelling is very, very similar to a state machine... except that it must handle people executing things in the wrong order.

    Ideally I would like to catch the "out of order" exception (though I don't think it's really an exception as such), check the 'exception' to see which function was attempted, and then handle it. I have done some research around enabling AllowBufferedReceive. However, I don't want to be able to execute out of order (I don't think), but instead give a detailed response if it does happen. I've looked at the new beta state machine template for WF 4, but I'm not sure if it does what I'm after. I'm not sure if I have the wrong end of the stick, so any help would be greatly appreciated.

    EDIT: To help clarify... sorry, it's a tricky one to explain. The standard I am trying to implement (the e-learning standard SCORM RTE) is structured like a state machine, i.e. certain functions can only be executed in certain states. However, the standard specifies that if the calling client tries to execute a function that it is not meant to, then a warning should be issued... for example, "you cannot use GetValue(), because you have not yet Initialized". Ideally I'd like to structure the workflow as the theoretical state machine and not need multiple if/elses to handle all the scenarios where something could be executed out of order. I'd like to catch an out-of-order exception (but I don't think there is such an exception, as it's not in the debugger) and rethrow it.


  • WTF is wtf? (in WebKit code base)

    - by Motti
    I downloaded Chromium's code base and ran across the WTF namespace:

        namespace WTF {

            /*
             * C++'s idea of a reinterpret_cast lacks sufficient cojones.
             */
            template<typename TO, typename FROM>
            TO bitwise_cast(FROM in)
            {
                COMPILE_ASSERT(sizeof(TO) == sizeof(FROM), WTF_wtf_reinterpret_cast_sizeof_types_is_equal);
                union {
                    FROM from;
                    TO to;
                } u;
                u.from = in;
                return u.to;
            }

        } // namespace WTF

    Does this mean what I think it means? Could be: the bitwise_cast implementation specified here will not compile if either TO or FROM is not a POD, and it is not (AFAIK) more powerful than C++'s built-in reinterpret_cast. The only point of light I see here is that nobody seems to be using bitwise_cast in the Chromium project.

    I see there's some legalese, so I'll put in the little letters to keep out of trouble:

        /*
         * Copyright (C) 2008 Apple Inc. All Rights Reserved.
         *
         * Redistribution and use in source and binary forms, with or without
         * modification, are permitted provided that the following conditions
         * are met:
         * 1. Redistributions of source code must retain the above copyright
         *    notice, this list of conditions and the following disclaimer.
         * 2. Redistributions in binary form must reproduce the above copyright
         *    notice, this list of conditions and the following disclaimer in the
         *    documentation and/or other materials provided with the distribution.
         *
         * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
         * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
         * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
         * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
         * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
         * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
         * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
         * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
         * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
         * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
         * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
         */


  • Executing a .NET Managed Assembly from SQL Server 2008 - Pro's, Con's & Recommendations

    - by RPM1984
    Hi guys, looking for opinions/recommendations/links for the following scenario I'm currently facing.

    The platform:

    - .NET 4.0 web application
    - SQL Server 2008

    The task: Overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" deciding to post a forum topic. This results in (in the end solution) needing to look at various factors (about the post he made), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of those operations.

    The options: OK, so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ to SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (a C# class library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages:

    - Simplicity of math. Complex math in .NET vs. complex math in T-SQL. No brainer. =)
    - Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL. The T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math, it delegates off to the managed assembly.

    So, having said that, I've never done this before (call a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated.

    So, what am I asking? Well, firstly, I need some better references on how to actually do this. "This" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008. Secondly, who out there has done this, and what problems (if any) did you face?

    I realize this may be difficult to provide a "correct answer" for, so I'll try to give it to whoever gives me the answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
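
    For reference, the moving parts are small. A hedged sketch (all names made up): a static method marked with SqlFunction on the .NET side, then CREATE ASSEMBLY and a T-SQL function bound with EXTERNAL NAME. One caveat worth knowing up front: SQL Server 2008 hosts CLR 2.0, so the class library must target .NET 3.5 or earlier; only SQL Server 2012 onwards loads .NET 4 assemblies.

        using Microsoft.SqlServer.Server;
        using System.Data.SqlTypes;

        public static class ForumMath
        {
            // A deterministic scalar function callable from T-SQL.
            [SqlFunction(IsDeterministic = true)]
            public static SqlDouble PostScore(SqlDouble views, SqlDouble replies)
            {
                return views * 0.25 + replies * 2.0;
            }
        }

    And on the SQL side:

        -- One-time server setup:
        -- EXEC sp_configure 'clr enabled', 1; RECONFIGURE;

        CREATE ASSEMBLY ForumMath FROM 'C:\clr\ForumMath.dll' WITH PERMISSION_SET = SAFE;
        GO
        CREATE FUNCTION dbo.PostScore(@views float, @replies float)
        RETURNS float
        AS EXTERNAL NAME ForumMath.ForumMath.PostScore;
        GO
        SELECT dbo.PostScore(1000, 12);  -- callable from any stored procedure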


  • JavaScript accessing form elements using document.forms[].elements[]

    - by thecoshman
    This:

        var loc_name = document.forms['create_<?php echo(htmlspecialchars($window_ID)); ?>'].elements['location_name'];
        alert(loc_name);

    just gives me the message 'undefined', whereas...

        var loc_name = document.forms['create_<?php echo(htmlspecialchars($window_ID)); ?>'];
        alert(loc_name);

    gives me the form object. Have I just got this all wrong? What is the 'proper' way to access this form element? The form element has the correct name, and it has an id; the id is similar but not the same.

    HTML, as requested:

        <form action="javascript:void(0)" name="create_<?php echo(htmlspecialchars($window_ID)); ?>" method="GET" onsubmit="return false">
          <td>
            <input type="text" id="location_name_<?php echo($window_ID); ?>" name="location_name" value="Enter a name"
                   onfocus="if(this.value == 'Enter a name'){this.value = ''}"
                   onblur="if(this.value == ''){ this.value = 'Enter a name' }" />
          </td>
          <td colspan="2">
            <input type="button" name="create_location" value="Create"
                   onclick="var pre_row_was = $('#pre_form_row_<?php echo($window_ID); ?>').innerHTML;
                            $('#pre_form_row_<?php echo($window_ID); ?>').innerHTML = '<td colspan=\'3\'>Validating...</td>';
                            var loc_name = document.forms['create_<?php echo(htmlspecialchars($window_ID)); ?>'].elements['location_name'];
                            alert(loc_name);
                            if(loc_name.value == '') {
                                alert('You can\'t leave the room name blank');
                                loc_name.focus();
                                loc_name.value = 'Enter a name';
                                $('#pre_form_row_<?php echo($window_ID); ?>').innerHTML = pre_row_was;
                                return false;
                            }
                            if(loc_name.value == 'Enter a name') {
                                alert('You must enter a room name first');
                                loc_name.focus();
                                $('#pre_form_row_<?php echo($window_ID); ?>').innerHTML = pre_row_was;
                                return false;
                            }
                            $('#pre_form_row_<?php echo($window_ID); ?>').innerHTML = pre_row_was;
                            Window_manager.new_window().load_xml('location/create.php?location_name=' + loc_name.value).display();" />
          </td>
        </form>

    ugly as sin, that is...
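
    The markup is the likely culprit (a hedged note, not part of the original question): a <form> that directly wraps <td> elements, rather than living inside one, is invalid HTML, so browsers relocate the empty form element out of the table while parsing. The inputs end up outside the form in the DOM, which is why form.elements['location_name'] comes back undefined even though the form object itself is found. Since the input already carries a unique id, looking it up directly sidesteps the problem:

        var loc_name = document.getElementById('location_name_<?php echo($window_ID); ?>');
        if (loc_name) {
            alert(loc_name.value);
        }

    The longer-term fix would be to move the <form> so it nests validly, e.g. wrapping the whole <table> or sitting inside a single <td>.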


  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central and most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed, listing hypothetical table names and fields:

        TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubGroup:      PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubSubGroup:   PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the types of the fields, as I don't think that's particularly important to the situation. In addition, it's worth saying that rather than the four repeated fields in the example above, I'm looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason.

    In addition, the "groups" represented here have a property-inheritance relationship. If DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never require that the model extends beyond three levels, so there is no need for flexibility in that area.

    Is there a design smell from several tables which describe very similar entities having almost identical fields? If so, what might be a better design of the example above? I'm using the phrase "design smell" to indicate a possible problem. Of course, in any given situation, a particular design might well be the best solution; I'm looking for a more general answer, wondering what might be wrong with this design and what might be the better design were that the case.

    Possibly related, but not primary, questions:

    1. Is this database schema in a reasonably normal form (e.g. to 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied, though? Are repeating groups allowed in different tables?
    2. Is there a best-practice method for implementing the inheritance relationship in a database as I require? The method above feels clunky to me, because any query on the SubSubGroup necessarily needs to join onto the SubGroup and TopLevelGroup tables to collect inherited facts, which can make even trivial joins requiring facts from the SubSubGroup table rather long-winded.

    There are, of course, political considerations to making a relatively large change like this. For the purpose of this question, I'm happy to ignore that fact in the interests of keeping the answers ring-fenced to the technical problem.
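
    On the inheritance question, the NULL-means-inherit convention already in the schema can at least be centralized in one view, so that ordinary queries stop repeating the two joins. A hedged sketch against the hypothetical tables above:

        CREATE VIEW SubSubGroupResolved AS
        SELECT ssg.PK_SubSubGroupId,
               -- First non-NULL value wins, walking up the fixed three levels.
               COALESCE(ssg.DisplaysXOnBill, sg.DisplaysXOnBill, tlg.DisplaysXOnBill) AS DisplaysXOnBill,
               COALESCE(ssg.DisplaysYOnBill, sg.DisplaysYOnBill, tlg.DisplaysYOnBill) AS DisplaysYOnBill,
               COALESCE(ssg.IsInvoicedForJ,  sg.IsInvoicedForJ,  tlg.IsInvoicedForJ)  AS IsInvoicedForJ,
               COALESCE(ssg.IsInvoicedForK,  sg.IsInvoicedForK,  tlg.IsInvoicedForK)  AS IsInvoicedForK
        FROM SubSubGroup ssg
        JOIN SubGroup sg       ON sg.PK_SubGroupId = ssg.FK_ParentSubGroupId
        JOIN TopLevelGroup tlg ON tlg.PK_TLGroupId = sg.FK_ParentTopLevelGroupId;

    With 86 repeated fields the view is tedious to write once, but every consumer can then read resolved values with a plain SELECT.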


  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases.

    I'm not quite happy with this solution though; the only positive thing is its simplicity...

    - I can only store a fixed number of columns.
    - I need to create indexes on all columns.
    - I need to recreate the table on schema changes.

    Some of my key design criteria are:

    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousands of columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
    - Using .NET/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice:

    - Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.
    - Solution 2: A key/value store. All values are stored as strings and converted when needed. Also uses a lot of space, and of course, I hate having to convert everything to strings.
    - Solution 3: Use an XML database or store values as XML. Without any experience, I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better into a relational model, and being able to join the data is helpful.

    I can't help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research does something like this for their questionnaires, but there are few open-source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but it lacks a lot of the stuff I do need.

    I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
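
    Solution 1 is usually spelled as a typed entity-attribute-value (EAV) table. A hedged sketch of that shape (names hypothetical; the DDL sticks to types both MySQL and SQLite accept):

        CREATE TABLE entry_value (
            entry_id   INTEGER      NOT NULL,  -- which data-entry record
            field_id   INTEGER      NOT NULL,  -- which user-defined field
            field_type SMALLINT     NOT NULL,  -- discriminator: 1=int, 2=text, 3=real, 4=datetime
            int_val    INTEGER,                -- exactly one *_val column is non-NULL per row
            text_val   VARCHAR(255),
            real_val   REAL,
            date_val   VARCHAR(32),            -- ISO-8601 string, portable across both engines
            PRIMARY KEY (entry_id, field_id)
        );

        -- Per-type indexes keep range queries fast without indexing "all columns".
        CREATE INDEX ix_entry_value_int  ON entry_value (field_id, int_val);
        CREATE INDEX ix_entry_value_text ON entry_value (field_id, text_val);

    The space cost is the NULLs per row, but schema changes become pure data changes: adding a user-defined column no longer touches the DDL.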


  • Design Technique: How to design a complex system for processing orders, products and units.

    - by Shyam
    Hi, programming is fun: I learned that by trying out simple challenges, reading some books and following some tutorials. I am able to grasp the concepts of writing with OO (I do so in Ruby) and write a bit of code myself. What bugs me though is that I feel I am re-inventing the wheel: I haven't followed an education or found a (free) book that explains the why's instead of the how's, and I've learned from the A-Team that it is the plan that makes it come together.

    So, armed with my nuby Ruby skills, I decided I wanted to program a virtual store. I figured out that my virtual store will have:

    - Products and services
    - Inventories
    - Orders and shipping
    - Customers

    Now this isn't complex at all. With the help of some cool tools (CmapTools), I drew out some concepts, but quickly enough (thanks to my inferior experience in designing), my design started to bite me. My very first product line was virtual "laptops". So, I created a class (Ruby):

        class Product
          attr_accessor :name, :price

          def initialize(name, price)
            @name = name
            @price = price
          end
        end

    which can be instantiated by doing (IRb):

        x = Product.new("Banana Pro", 250)

    Since I want my virtual customers to be able to purchase more than one product, or various types, I figured out I needed some kind of "order" mechanism:

        class Order
          def initialize(order_no)
            @order_no = order_no
            @line_items = []
          end

          def add_product(myproduct)
            @line_items << myproduct
          end

          def show_order()
            puts @order_no
            @line_items.each do |x|
              puts x.name.to_s + "\t" + x.price.to_s
            end
          end
        end

    which can be instantiated by doing (IRb):

        z = Order.new(1234)
        z.add_product(x)
        z.show_order

    Splendid, I now have a very simple ordering system that allows me to add products to an order. But here comes my real question: what if I have three models of my product (economy, business, showoff)? Or what if my products are composed of separate units (bigger screen, nicer keyboard, different OS)? Surely I could make them three separate products, or add complexity to my Product class, but I am looking for best practices for designing a flexible product object that can be used in the real world, to facilitate a complex system.

    My apologies if my grammar and my spelling are in error, as English is not my first language, and I took the time to check as far as I could understand and translate properly! Thank you for your answers, comments and feedback!
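
    One common shape for that last question (a hedged sketch, not the only best practice): keep Product as the catalogue entry, and model the sellable configuration as a variant composed of option parts, so the price is derived rather than duplicated per model:

        class Option
          attr_reader :name, :price_delta

          def initialize(name, price_delta)
            @name = name
            @price_delta = price_delta
          end
        end

        class Variant
          attr_reader :name, :options

          def initialize(product, name, options = [])
            @product = product
            @name = name        # e.g. "economy", "business", "showoff"
            @options = options
          end

          # Base price plus the sum of the option surcharges.
          def price
            @product.price + @options.map { |o| o.price_delta }.inject(0, :+)
          end
        end

        laptop  = Product.new("Banana Pro", 250)
        showoff = Variant.new(laptop, "showoff", [Option.new("bigger screen", 120)])
        puts showoff.price   # => 370

    The Order class then holds variants instead of products, and the line-item printing keeps working, since a Variant still answers name and price.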


  • Why is my Interface Builder wiring so bonkers and what can I do to straighten it out?

    - by editor
    I've been working on an iPhone application in Xcode and Interface Builder, using the Tab Bar project type. After getting a table view of topics (business sectors) working fine, I realized that I would need to add a navigation controller to allow the user to drill into a subtopics (subsectors) table. As a green Objective-C developer, that was confusing, but I managed to get it working by reading various documentation and trying out a few different IB options.

    My current setup is a Tab Bar Controller, with tab 1 a Navigation Controller and tab 2 a plain view with a Table View placed into it. The wiring works: I can log when table rows are selected, and I'm ready to push a new view controller onto the stack so that I can display the subtopics table view.

    My problem: for some reason the first tab's Table View is a delegate and dataSource of the second tab. It doesn't make sense to me, and I can't figure out why that's the only setup that works. Here is the wiring:

    - Navigation Controller (Sectors) is a delegate of Tab Bar
    - Navigation Bar is a delegate of Navigation Controller (Sectors)
    - View Controller (Sectors) has a view of Table View
    - Table View (in Navigation Controller (Sectors)) is a delegate of First View Controller (Companies)
    - Table View (in Navigation Controller (Sectors)) is a dataSource outlet of First View Controller (Companies)
    - First View Controller (Companies) has a view of Table View
    - Table View (in First View Controller (Companies)) is not hooked up to a dataSource outlet and is not a delegate

    When I click the tab buttons and look at the Inspector, I see that the first tab is correctly hooked up to my MainWindow.xib and the second tab has selected a nib called SecondView.xib. It's in the File's Owner of MainWindow.xib that I inherit UITableViewDataSource and UITableViewDelegate (and also UITabBarControllerDelegate) in the .h, and in the .m where I implement the delegate methods.

    Why does this setup only work when the Table View in my first tab (View Controller (Sectors)) is a delegate and dataSource of the second tab? I'm confused: why wouldn't it need to be hooked up to the Navigation Controller-enabled tab in which the Table View is seen (Navigation Controller (Sectors))? The Table View seen on the second tab has neither a dataSource nor a delegate.

    I'm having trouble getting pushViewController to fire (self.navigationController is not nil, but the new view controller still doesn't load), and I suspect that I need to work out this IB wiring issue before I can troubleshoot why the nav controller won't push a new view controller onto the stack.

        if (nil == self.navigationController) {
            NSLog(@"self.navigationController is nil.");
        } else {
            NSLog(@"self.navigationController is not nil.");
            SectorList *subsectorViewController = [[SectorList alloc] initWithNibName:@"SectorList" bundle:nil];
            subsectorViewController.title = @"Subsectors";
            [[self navigationController] pushViewController:subsectorViewController animated:YES];
            [subsectorViewController release];
        }

