Search Results

Search found 243 results on 10 pages for 'imho'.


  • tikz: set appropriate x value for a node

    - by basweber
    This question resulted from the question here. I want to produce a curly brace which spans some lines of text. The problem is that I have to align the x coordinate manually, which is not a clean solution. Currently I use:

        \begin{frame}{Example}
          \begin{itemize}
            \item The long Issue 1 \tikz[remember picture] \node[coordinate,yshift=0.7em] (n1) {}; \\ spanning 2 lines
            \item Issue 2 \tikz[remember picture] \node[coordinate, xshift=1.597cm] (n2) {};
            \item Issue 3
          \end{itemize}
          \visible<2->{
            \begin{tikzpicture}[overlay,remember picture]
              \draw[thick,decorate,decoration={brace,amplitude=5pt}]
                (n1) -- (n2) node[midway, right=4pt] {One and two are cool};
            \end{tikzpicture}
          } % end visible
        \end{frame}

    which produces the desired result. The unsatisfying thing is that I had to figure out the xshift value of 1.597cm by trial and error (more or less); without the xshift argument the brace comes out misaligned. I guess there is an elegant way to avoid the explicit xshift value. The best way, IMHO, would be to calculate the maximum x value of the two nodes and use that (as already suggested by Geoff). But it would already be very handy to be able to explicitly define the absolute x values of both nodes while keeping their current y values. This would avoid the fiddly procedure of adapting the third decimal place to ensure that the brace looks vertical.

    Read the article

  • Pair-wise iteration in C# or sliding window enumerator

    - by f3lix
    If I have an IEnumerable like:

        string[] items = new string[] { "a", "b", "c", "d" };

    I would like to loop through all the pairs of consecutive items (a sliding window of size 2), which would be ("a","b"), ("b","c"), ("c","d"). My solution is this:

        public static IEnumerable<Pair<T, T>> Pairs(IEnumerable<T> enumerable)
        {
            IEnumerator<T> e = enumerable.GetEnumerator();
            e.MoveNext();
            T current = e.Current;
            while (e.MoveNext())
            {
                T next = e.Current;
                yield return new Pair<T, T>(current, next);
                current = next;
            }
        }

        // used like this:
        foreach (Pair<string, string> pair in IterTools<string>.Pairs(items))
        {
            Console.WriteLine("{0}, {1}", pair.First, pair.Second);
        }

    When I wrote this code, I wondered if there are already functions in the .NET Framework that do the same thing, and do it not just for pairs but for tuples of any size. IMHO there should be a nice way to do this kind of sliding-window operation. I use C# 2.0 and I can imagine that with C# 3.0 (with LINQ) there are more (and nicer) ways to do this, but I'm primarily interested in C# 2.0 solutions. That said, I will also appreciate C# 3.0 solutions.
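    A minimal sketch of one possible generalization: a sliding window of arbitrary size, written against C# 2.0 (no LINQ). The SlidingWindow class and Windows method are names invented here, not existing .NET Framework APIs:

        using System;
        using System.Collections.Generic;

        public static class SlidingWindow
        {
            // Yields each run of 'size' consecutive items as an array.
            public static IEnumerable<T[]> Windows<T>(IEnumerable<T> source, int size)
            {
                if (size <= 0) throw new ArgumentOutOfRangeException("size");
                Queue<T> window = new Queue<T>(size);
                foreach (T item in source)
                {
                    window.Enqueue(item);
                    if (window.Count == size)
                    {
                        yield return window.ToArray(); // snapshot of the current window
                        window.Dequeue();              // slide forward by one element
                    }
                }
            }
        }

    Called as Windows(items, 2), this yields the same pairs as Pairs above, only as two-element arrays.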

    Read the article

  • Practices for keeping JavaScript and CSS in sync?

    - by Rene Saarsoo
    I'm working on a large JavaScript-heavy app. Several pieces of JavaScript have some related CSS rules. Our current practice is for each JavaScript file to have an optional related CSS file, like so:

        MyComponent.js   // Adds CSS class "my-comp" to div
        MyComponent.css  // Defines .my-comp { color: green }

    This way I know that all CSS related to MyComponent.js will be in MyComponent.css. But the thing is, I all too often have very little CSS in those files. And all too often I feel that it's too much effort to create a whole file to contain just a few lines of CSS; it would be easier to just hardcode the styles inside the JavaScript. But this would be the path to the dark side... Lately I've been thinking of embedding the CSS directly inside the JavaScript, so it could still be extracted in the build process and merged into one large CSS file. This way I wouldn't have to create a new file for every little piece of CSS. Additionally, when I move/rename/delete the JavaScript file I don't have to also move/rename/delete the CSS file. But how to embed CSS inside JavaScript? In most other languages I would just use a string, but JavaScript has some issues with multiline strings. The following looks, IMHO, quite ugly:

        Page.addCSS("\
        .my-comp > p {\
            font-weight: bold;\
            color: green;\
        }\
        ");

    What other practices do you have for keeping your JavaScript and CSS in sync?

    Read the article

  • Is there any Java Decompiler that can correctly decompile calls to overloaded methods?

    - by mihi
    Consider this (IMHO simple) example:

        public class DecompilerTest {
            public static void main(String[] args) {
                Object s1 = "The", s2 = "answer";
                doPrint((Object) "You should know:");
                for (int i = 0; i < 2; i++) {
                    doPrint(s1);
                    doPrint(s2);
                    s1 = "is";
                    s2 = new Integer(42);
                }
                System.out.println();
            }

            private static void doPrint(String s1) {
                System.out.print("Wrong!");
            }

            private static void doPrint(Object s1) {
                System.out.print(s1 + " ");
            }
        }

    Compile it with source/target level 1.1 without debug information (i.e. no local variable information should be present) and try to decompile it. I tried Jad, JD-GUI and Fernflower, and all of them got at least one of the calls wrong (i.e. the recompiled program printed "Wrong!" at least once). Is there really no Java decompiler that can infer the right casts, so that it will not call the wrong overload?

    Read the article

  • How to implement custom JSF component for drawing chart?

    - by Roman
    I want to create a component which can be used like:

        <mc:chart data="#{bean.data}" width="200" height="300" />

    where #{bean.data} returns a collection of some objects, or a chart model object, or something else that can be represented as a chart (to keep it simple, let's assume it returns a collection of integers). I want this component to generate HTML like this:

        <img src="someimg123.png" width="200" height="300"/>

    The problem is that I have some method which can receive data and return an image, like:

        public RenderedImage getChartImage(Collection<Integer> data) { ... }

    and I also have a component for drawing a dynamic image:

        <o:dynamicImage width="200" height="300" data="#{bean.readyChartImage}"/>

    This component generates exactly the HTML I need, but its parameter is an array of bytes or a RenderedImage, i.e. it needs a method in the bean like this:

        public RenderedImage getReadyChartImage() { ... }

    So, one approach is to use a propertyChangeListener on submit to set the data (Collection<Integer>) for drawing the chart and then use the <o:dynamicImage /> component. But I'd like to create my own component which receives the data and draws the chart. I'm using Facelets, but that's not so important here. Any ideas how to create the desired component?

    P.S. One solution I was thinking about is not to use <o:dynamicImage/> but to use a servlet to stream the image. But I don't know how to implement that correctly: how to tie the JSF component to the servlet, how to cache already-built chart images (generating a new identical image for each request could cause performance problems, IMHO), and so on.

    Read the article

  • Serializing QGraphicsScene contents

    - by Rob
    I am using the Qt QGraphicsScene class, adding pre-defined items such as QGraphicsRectItem, QGraphicsLineItem, etc., and I want to serialize the scene contents to disk. However, the base QGraphicsItem class (from which the other items I use derive) doesn't support serialization, so I need to roll my own code. The problem is that all access to these objects is via a base QGraphicsItem pointer, so the serialization code I have is horrible:

        QGraphicsScene* scene = new QGraphicsScene;
        scene->addRect(QRectF(0, 0, 100, 100));
        scene->addLine(QLineF(0, 0, 100, 100));
        ...
        QList<QGraphicsItem*> items = scene->items();
        foreach (QGraphicsItem* item, items) {
            if (item->type() == QGraphicsRectItem::Type) {
                QGraphicsRectItem* rect = qgraphicsitem_cast<QGraphicsRectItem*>(item);
                // Access QGraphicsRectItem members here
            } else if (item->type() == QGraphicsLineItem::Type) {
                QGraphicsLineItem* line = qgraphicsitem_cast<QGraphicsLineItem*>(item);
                // Access QGraphicsLineItem members here
            }
            ...
        }

    This is not good code, IMHO. So, instead I could create an abstract base class like this:

        class Item {
        public:
            virtual void serialize(QDataStream& strm, int version) = 0;
        };

        class Rect : public QGraphicsRectItem, public Item {
        public:
            void serialize(QDataStream& strm, int version) {
                // Serialize this object
            }
            ...
        };

    I can then add Rect objects using QGraphicsScene::addItem(new Rect(...)). But this doesn't really help me, as the following will crash:

        QList<QGraphicsItem*> items = scene->items();
        foreach (QGraphicsItem* item, items) {
            Item* myitem = reinterpret_cast<Item*>(item);
            myitem->serialize(...); // FAIL
        }

    Any way I can make this work?

    Read the article

  • super light software development process

    - by Walty
    Hi. Of the development processes I have been involved in so far, most have had teams of a SINGLE member, or occasionally two. We used Python + Django for the major development, and the development process is actually very fast: we do have code reviews, design-pattern discussions, and constant refactoring. Though the team size is small, I do think there are some development processes / best practices that could be enforced. For example, using SVN would definitely be better than regular copy backups. I did read some articles and books about Agile, XP and continuous integration. I think they are nice, but still too heavy for this case (a team of 1 or 2, and fast coding). For example, IMHO, with nice design patterns and iterative development plus refactoring, TDD MIGHT be overkill, or at least the overhead does not outweigh the advantages. The same goes for pair programming. Automated testing is a nice idea, but it seems not technically feasible for every project. Our current practices are: SVN + milestones + code reviews. I wonder if there are development processes / best practices specifically targeted at such super-light teams? Thanks.

    Read the article

  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic. In the hope of getting some insight into how a modern streams/serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts whether IOStreams can compare to, e.g., the STL from an overall architectural point of view. Read, e.g., this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL. What surprises me in particular:

      - It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this; does anyone know good resources?).
      - Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there are probably even worse examples). This makes it so much harder to understand the overall design and how the individual parts cooperate. Even the book I mentioned above doesn't help that much (IMHO).

    Thus my question: if you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)

    Read the article

  • What happens when we combine RAII and goto?

    - by Robert Gould
    I'm wondering, for no other purpose than pure curiosity (because no one SHOULD EVER write code like this!), how the behavior of RAII meshes with the use of goto (lovely idea, isn't it?).

        class Two {
        public:
            ~Two() { printf("2,"); }
        };

        class Ghost {
        public:
            ~Ghost() { printf(" BOO! "); }
        };

        void foo() {
            {
                Two t;
                printf("1,");
                goto JUMP;
            }
            Ghost g;
        JUMP:
            printf("3");
        }

        int main() {
            foo();
        }

    When running this code in VS2005 I get the following output:

        1,2,3 BOO!

    However I imagined, guessed, hoped that 'BOO!' wouldn't actually appear, as the Ghost should never have been instantiated (IMHO; I don't know the actual expected behavior of this code). Any guru out there know what's up? I just realized that if I add an explicit constructor to Ghost, the code doesn't compile...

        class Ghost {
        public:
            Ghost() { printf(" HAHAHA! "); }
            ~Ghost() { printf(" BOO! "); }
        };

    Ah, the mystery...

    Read the article

  • Get :first-letter of :hover element with CSS

    - by Rudie
    Is it possible to get the first letter of an element while in 'hover mode'? This is how I think it would look, but it's not working in Chrome 10:

        a:hover:first-letter

    or

        a:first-letter:hover

    Technically (IMHO) they're not the same. The first takes the first letter of the hovering element. The second takes the entire element if the first letter is hovering. I require the first. As you can see on http://css4.hotblocks.nl (if you have a 1900px screen and a DOM inspector), if you uncomment the CSS, both don't work. I want only the first letter of the element to color red when the entire element is in :hover mode. Is it possible without additional HTML tags? Thanks.

    Edit: I've changed my online example for the better. The CSS is now divided into separate <style> blocks, which makes it easier to turn try-outs on and off. The conclusion, so far, is this: in Firefox 3.6/4, a:first-letter:hover does nothing (good) and a:hover:first-letter works perfectly (good!). In Chrome 10, a:first-letter:hover does nothing (good) and a:hover:first-letter breaks the previous CSS 'statement'. (In my example it breaks nothing, because it's in a separate <style> block.) Which brings us to: once again Google Chrome lags behind Firefox =(

    Read the article

  • Please help clean this loop

    - by Alex Angelini
    I do not code much in JavaScript, but I have the following snippet which IMHO looks horrendous, and I have to do this nested iteration quite often in my code. Does anyone have a prettier/easier-to-read solution?

        function addBrowse(data) {
            var list = $('<ul></ul>')
            for (i = 0; i < data.list.length; i++) {
                var file = list.append('<li class="toLeft">' + data.list[i].name + '</li>')
                for (j = 0; j < data.list[i].children.length; j++) {
                    var db = file.append('<li>' + data.list[i].children[j].name + '</li>')
                    for (k = 0; k < data.list[i].children[j].children.length; k++)
                        db.append('<li class="toRight">' + data.list[i].children[j].children[k].name + '</li>')
                }
            }
            $('#browse').append(list).show()
        }

    Here is a sample data element:

        {"file":"","db":"","tbl":"","page":"browse","list":[
          {"name":"/home/alex/GoSource/test1.txt","children":[{"name":"go","children":[{"name":"validation1","children":[]}]}]},
          {"name":"/home/alex/GoSource/test2.txt","children":[{"name":"go","children":[{"name":"validation2","children":[]}]}]},
          {"name":"/home/alex/GoSource/test3.txt","children":[{"name":"go","children":[{"name":"validation3","children":[]}]}]}
        ]}

    Thanks a lot.

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 Beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth-usage standpoint, as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" header. Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out of the box, IMHO.

    Update: At first I marked the solution using System.IO.Compression as the answer, as I could never "seem" to get IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!

    The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.

    Read the article

  • C standard addressing simplification inconsistency

    - by Chris Lutz
    Section §6.5.3.2 "Address and indirection operators" ¶3 says (relevant part only):

        The unary & operator returns the address of its operand. ... If the operand is the result of a unary * operator, neither that operator nor the & operator is evaluated and the result is as if both were omitted, except that the constraints on the operators still apply and the result is not an lvalue. Similarly, if the operand is the result of a [] operator, neither the & operator nor the unary * that is implied by the [] is evaluated and the result is as if the & operator were removed and the [] operator were changed to a + operator. ...

    This means that this:

        int *i = NULL;
        printf("%p", (void *) (&*i));
        printf("%p", (void *) (&i[10]));

    should be perfectly legal, printing the null pointer (probably 0) and the null pointer plus 10 (probably 10). The standard seems very clear that both of those cases are required to be optimized. However, it doesn't seem to require the following to be optimized:

        struct { int a; short b; } *s = 0;
        printf("%p", (void *) (&s->b));

    This seems awfully inconsistent. I can see no reason why the above code shouldn't print the null pointer plus sizeof(int) (possibly 4). Simplifying an &-> expression is conceptually the same (IMHO) as simplifying &[]: a simple address-plus-offset. It's even an offset that is determinable at compile time, rather than potentially at runtime as with the [] operator. Is there anything in the rationale about why this is so seemingly inconsistent?

    Read the article

  • Touch draw in Quartz 2D/Core Graphics

    - by OgreSwamp
    Hello, I'm trying to implement a "hand draw tool". At the moment the algorithm looks like this (I won't insert any code because the methods are quite big; I'll try to explain the idea).

    Drawing:

      - In the touchesBegan: method I create an NSMutableArray *pointsArray, add the point to it, and call setNeedsDisplay.
      - In the touchesMoved: method I calculate the points between the last point added to pointsArray and the current point, add all of them to pointsArray, and call setNeedsDisplay.
      - In the touchesEnded: method I calculate the points between the last point added to the array and the current point, set the touchesWereFinished flag, and call setNeedsDisplay.

    Rendering:

      - The drawRect: method checks that pointsArray != nil and that there is data in it. If there is, it draws a circle at each point of this array. If the touchesWereFinished flag is set, it saves the current context to a UIImage, releases pointsArray, sets it to nil, and resets the flag.

    There are a lot of disadvantages to this method:

      - It is slow.
      - It becomes extremely slow when the user touches and moves a finger for a long time; the array becomes enormous.
      - "Lines" composed of circles are ugly.

    I would like to change my algorithm to make it a bit faster and the lines smoother. As a result I would like to have lines like in the following picture (sorry, not enough reputation to insert an image): http://2.bp.blogspot.com/_r5VzEAUYXJ4/SrOYp8tJCPI/AAAAAAAAAMw/ZwDKXiHlhV0/s320/SketchBook+Mobile(4).png Can you advise me how I can draw lines this way (smooth and slim at the edges)? I thought about drawing circles with an alpha gradient at the edges (to make the lines smoother), but that would be extremely slow, IMHO. Thanks for the help.

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I've found a nice .dll on CodeProject (see "optimizing serialization..."), so I wrote a modified version of the classes above overriding the default serialization/deserialization procedure, reaching very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice:

      - Use a "helper" graph of objects which takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :(
      - Modify the "old" graph to include a modified version of the serialization procedure... so I'll be able to deserialize old files and save them in the new format...... this doesn't sound too good, IMHO.

    Well, any help will be highly appreciated :)
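    A hedged sketch of the first idea, as a one-off converter (shown in C#; the same shape works in VB.NET). It has to run against the original, unmodified classes so that BinaryFormatter still uses the default serialization; the new-format writer is passed in as a delegate because the CodeProject serializer's API is not shown here, and the mirror classes below are stand-ins for the real ones:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        // Stand-in mirrors of the original graph, only to keep the sketch
        // self-contained; the real converter would reference the old assembly.
        [Serializable] public class SmallObject : List<string> { }
        [Serializable] public class BigObjet : List<SmallObject> { }

        public static class GraphMigration
        {
            // Reads a legacy BinaryFormatter file and hands the graph to the
            // new-format writer, so old files can be converted once and kept.
            public static void ConvertLegacyFile(string oldPath, string newPath,
                                                 Action<BigObjet, Stream> writeNewFormat)
            {
                BigObjet graph;
                using (FileStream input = File.OpenRead(oldPath))
                {
                    graph = (BigObjet)new BinaryFormatter().Deserialize(input);
                }
                using (FileStream output = File.Create(newPath))
                {
                    writeNewFormat(graph, output);
                }
            }
        }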

    Read the article

  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading an article about the recently released Gizzard sharding framework from Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should also be operations whose sequence doesn't matter. Now, my question is: how do you make a write operation idempotent? The only thing I can imagine is to attach a version number to each write. For example, in a blog system, each blog post has a $blog_id and $content. At the application level, we always write blog content like this: write($blog_id, $content, $version), where $version is guaranteed to be unique at the application level. So, if the application first tries to set one post to "Hello world" and later wants it to be "Goodbye", we have these two write operations:

        write($blog_id, "Hello world", 1);
        write($blog_id, "Goodbye", 2);

    These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
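    A minimal sketch of that versioned-write idea, with an in-memory dictionary standing in for the database (all names here are invented for illustration):

        using System.Collections.Generic;

        public class VersionedBlogStore
        {
            // Each (blogId, version) pair is its own record, so replaying a
            // write just overwrites the record with identical data: applying
            // it once or ten times, in any order, leaves the same final state.
            private readonly Dictionary<string, string> rows =
                new Dictionary<string, string>();

            public void Write(int blogId, string content, long version)
            {
                rows[blogId + ":" + version] = content;
            }

            public string Read(int blogId, long version)
            {
                string content;
                return rows.TryGetValue(blogId + ":" + version, out content)
                    ? content : null;
            }
        }

    Replaying Write(42, "Goodbye", 2) any number of times changes nothing after the first application, which is exactly the idempotence property the question is after.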

    Read the article

  • The perfect web UI framework (with Microsoft stack?) - architecture question?

    - by Igorek
    I'm looking for suggestions for the following issue, and I realize there is really not going to be a perfect answer to my question. I have a UI built in WinForms.NET (v4.0 framework) with a WCF back-end and EF4 model objects that I am looking to port to the web. The UI is not huge and not super complex, and it is structured well. But it is not a super simple system either. I am looking to pick a technology stack for the web front-end that will target desktop and (partially) mobile platforms, provide a good development platform to build on, and facilitate code reuse across the UI and back-end tiers.

    I would rather avoid:

      - custom coding of UI-centric scripts, because they are hard to debug, non-compiled, usually a maintenance nightmare, almost always start to contain business logic, and duplicate some of the logic that the back-end tiers have (especially validation)
      - custom-coding the desktop-web and mobile-web UIs separately (although I realize that the mobile web UI will likely contain fewer data-entry screens and more reporting screens)
      - non-.NET technology stacks

    I would love to:

      - target the reporting capabilities of the system toward mobile web browsers
      - not have to write a single line of script (JavaScript, jQuery, etc.)
      - utilize a good collection of controls that produces an elegant UI
      - use .NET for everything

    The way I see it right now, I need to re-write this app in Silverlight, utilize a 3rd-party UI framework like Telerik, and re-do the reporting UI again for mobile platforms separately. However, I'm rather concerned about the shelf life of Silverlight and the need to deploy a different architecture to deal with mobile platforms. Is there an ASP.NET/MVC/Ajax architecture/framework/library that would allow me to get at the power of .NET without painful (IMHO) client-side scripting, while providing a decent user experience? Thank you.

    Read the article

  • Use cases of [ordered], the new PowerShell 3.0 feature

    - by Roman Kuzmin
    PowerShell 3.0 CTP1 introduces a new feature, [ordered], which is somewhat of a shortcut for OrderedDictionary. I cannot imagine practical use cases for it. Why is this feature really useful? Can somebody provide some useful examples? Example: this is, IMHO, rather a demo case than a practical one:

        $a = [ordered]@{a=1; b=2; d=3; c=4}

    (I do not mind if it is still useful; then I am just looking for other useful cases.) I am not looking for use cases of OrderedDictionary; it is useful, indeed. But we can use it directly in v2.0 (and I do, a lot). I am trying to understand why this new feature [ordered] is needed in addition.

    Collected use cases from the answers:

        $hash = [ordered]@{}

    is shorter than

        $hash = New-Object System.Collections.Specialized.OrderedDictionary

    N.B. ordered is not a real shortcut for the type; New-Object ordered does not work.

    N.B. 2: But this is still a good shortcut because (I think; I cannot try) it creates a typical-for-PowerShell case-insensitive dictionary. The equivalent command in v2.0 is too long, indeed:

        New-Object System.Collections.Specialized.OrderedDictionary ([System.StringComparer]::OrdinalIgnoreCase)

    Read the article

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself into best practices (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1):

        -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors

    This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (an error, in my case) when compiling my code (probably triggered by -Weffc++):

        base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor

    So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because:

      - A destructor of a class that is intended for subclassing should always be virtual, IMHO.
      - The destructor is empty; why have it at all?
      - I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this.

    But I'm looking for ways to deal with this, so my question really is: is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?

    Read the article

  • World Economic Crisis. IT prospects

    - by Andrew Florko
    There was a similar question in 2008; two years have passed. Please share your expectations about the IT market and employment in the next year or two (or as far as you can predict). IMHO Russia (my native country) fully met the crisis in spring 2008. Stock markets shrank 3(!) times within half a year. Many developers were fired in those days, but I suppose just because business was shocked and froze some projects. Developers expected +20% salary growth per year in 2004-2007 (a developer's salary in Moscow was about $2-3K in early 2008). Then there was a 30% (very subjective) salary cut in 2008, and salaries were frozen till 2009. Now things are slowly coming back to 2008 levels. Looking into the future, I expect a pessimistic scenario and another crash. Our economy depends more and more on oil and gas every year. IT that serves industry will shrink, because we can't compete with China in real production. Due to the strong currency (the rouble is strong compared to the dollar) we can't rely on offshore programming. Our officials are concerned with an innovative economic breakthrough, but in practice it's just ordinary budget money assignment. I don't believe in innovations either, because who needs innovations if you have debts and tomorrow is vapor?

    Read the article

  • Finding What You Need in R: function arguments/parameters from outside the function's package

    - by doug
    Often in R there are a dozen functions scattered across as many packages, all of which have the same purpose but of course differ in accuracy, performance, theoretical rigor, and so on. How do you gather all of these in one place before you start your task? So for instance: the generic plot function. Setting secondary ticks is much easier (IMHO) using a function outside of the base package, minor.tick(nx=n, ny=n, tick.ratio=n), found in Hmisc. Of course, that doesn't show up in plot's docstring. Likewise, the data-input arguments to plot can be supplied by an object returned from the function hexbin, again from a library outside of the base installation (where plot resides). What would be great, obviously, is a programmatic way to gather these function arguments from the various libraries and put them in a single namespace.

    Edit (trying to re-state my example above more clearly): the arguments to plot supplied in the base package for, e.g., setting the axis tick frequency are xaxp/yaxp; however, one can also set the axis tick frequency via a function outside of the base package, as with the minor.tick function from the Hmisc package (but you wouldn't know that just from looking at the plot method signature). Is there a meta-function in R for this?

    So far, as I come across them, I've been manually gathering them in a TextMate 'snippet' (along with the attendant library imports). This isn't that difficult or time-consuming, but I can only update my snippet as I find out about these additional arguments/parameters. Is there a canonical R way to do this, or at least an easier way? Just in case that wasn't clear, I am not talking about the case where multiple packages provide functions directed at the same statistic or view (e.g., boxplot in the base package, boxplot.matrix in gplots, and bplots in Rlab). What I am talking about is the case in which the function name is the same across two or more packages.

    Read the article

  • assembling an object graph without an ORM -- in the service layer or data layer?

    - by Hans Gruber
    At my current gig, our persistence layer uses iBATIS going against SQL Server stored procedures (puke). IMHO this approach has many disadvantages over the use of a "true" ORM such as NHibernate or EF, but the one I'm trying to address here revolves around all the boilerplate code needed to map data from a result set into an object graph. Say I have the following DTO object graph I want to return to my presentation layer:

        IEnumerable<CustomerDTO>
          |--> IEnumerable<AddressDTO>
          |--> LatestOrderDTO

    The way I've implemented this is to have a discrete method in my DAO class to return each IEnumerable<*DTO>, and then have my service class be responsible for orchestrating the calls to the DAO. It then returns the fully assembled object graph to the client:

        public class SomeService
        {
            private IDao _someDao;

            public SomeService(IDao someDao)
            {
                this._someDao = someDao;
            }

            public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId)
            {
                var customers = _someDao.ListCustomersForBroker(brokerId);
                foreach (var customer in customers)
                {
                    customer.Addresses = _someDao.ListCustomersAddresses(brokerId);
                    customer.LatestOrder = _someDao.GetCustomerLatestOrder(brokerId);
                }
                return customers;
            }
        }

    My question is: should this logic belong in the service layer, or should I instead make my DAO return the assembled object graph? If I were using NHibernate, I assume this kind of relationship association between objects would come for "free"?
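    For comparison, a hedged sketch of one middle ground: keep the assembly in the service layer but fetch each child collection in a single call per level and stitch the graph in memory, avoiding one query per customer. ListAddressesForBroker and the Id/CustomerId properties are assumptions, not part of the original DAO:

        using System.Collections.Generic;
        using System.Linq;

        public class SomeServiceBatched
        {
            private readonly IDao _someDao;

            public SomeServiceBatched(IDao someDao) { _someDao = someDao; }

            public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId)
            {
                var customers = _someDao.ListCustomersForBroker(brokerId).ToList();

                // One round trip for all addresses, grouped by customer in
                // memory (assumes AddressDTO exposes a CustomerId key).
                var addressesByCustomer = _someDao.ListAddressesForBroker(brokerId)
                    .GroupBy(a => a.CustomerId)
                    .ToDictionary(g => g.Key, g => g.ToList());

                foreach (var customer in customers)
                {
                    List<AddressDTO> addresses;
                    if (addressesByCustomer.TryGetValue(customer.Id, out addresses))
                        customer.Addresses = addresses;
                    customer.LatestOrder = _someDao.GetCustomerLatestOrder(customer.Id);
                }
                return customers;
            }
        }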

    Read the article

  • jsf and eclipse setup. Lib in Explorer is different than lib in tomcat. How come?

    - by user384706
    Hi, I am a beginner with JSF 2.0, and I have a question related to Eclipse (I am using Helios).

      1. I create a Dynamic Web Project.
      2. I add the JSF project facet.
      3. I choose a JSF user library (I have created it using MyFaces).

    All OK so far. I notice, though, that in the Project Explorer the WebContent/WEB-INF/lib directory is empty instead of holding the MyFaces jars. The application works fine though. I looked into this, and the jars are actually being placed in the corresponding lib directory of the app deployed under wtpwebapps of the Tomcat instance of Eclipse, in the .plugins directory. OK, it works, but IMHO it is incorrect to have the Project Explorer inconsistent with the directories actually deployed, i.e. lib is shown empty in the Project Explorer but has the jars under wtpwebapps. Am I wrong to dislike this inconsistency? Is this how it should work, or am I doing something wrong in the way I set up my project? Thanks!

    Read the article

  • Unencrypted Image of Truecrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with TrueCrypt) other than with dd or similar sector-by-sector copy tools. These files, however, are very impractical given their size (and are obviously incompressible), which makes keeping multiple states/backups of your system partition rather expensive (especially considering current hdd prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) TrueCrypt partitions, or that there is no recovery environment for restoring the backup that would allow running TrueCrypt before doing any actual restoring. After some trial and error, however, I believe I have found a very simple way. Since TrueCrypt (running in Linux) creates a virtual block device that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did:

      1. Boot up a Linux live disk.
      2. Mount/open the drive/device/partition in TrueCrypt.
      3. Unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigned by TrueCrypt).
      4. Use partclone (this is what Clonezilla would do too, except that Clonezilla only offers to back up real drive partitions, not virtual block devices): partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile

    For restoring, steps 1-3 remain the same, and step 4 is: partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX

    A backup and test-restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup state). The backup file is ~40 GB (and compressible down to <8 GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one who wants to create images of encrypted drives but doesn't want to waste 100 GB on the backup of one single system state. So my question now is: given how simple this was, and that no one seems to mention anywhere that this is possible, did I miss something? Or did I do something wrong? Is there any situation that I didn't think of where this method will fail? Obviously, the backup file needs to be stored in some other encrypted place in order to remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing, however, is that you won't need to back up any partition tables, etc., since the reinstall will effectively take care of that. Is there anything else? This is IMHO still a lot better than having sector-by-sector images.

    Read the article

  • Entity Framework 4.0: Creating objects of correct type when using lazy loading

    - by DigiMortal
    In my posting about Entity Framework 4.0 and POCOs I introduced lazy loading in EF applications. EF uses proxy classes for lazy loading, and this means we have new types that come and go dynamically at runtime. We don't have these types available when we write code, but we cannot forget that EF may expect us to use dynamically generated types. In this posting I will give you a simple hint on how to use the correct types in your code.

    The background of lazy loading and proxy classes

    As a first thing, I will explain in short what a proxy class is. Business classes, when designed correctly, have no knowledge about their birth and death: they don't know how they are created and they don't know how their data is persisted. This is the responsibility of the object runtime. When we use lazy loading, we need slightly different classes that know how to load data for properties when code accesses a property for the first time. As we cannot add this functionality to our business classes (they may be stored through more than one data access technology or by more than one Data Access Layer (DAL)), we create proxy classes that extend our business classes. If we have a class called Product, and Product has a lazy-loaded property called Customer, then we need a proxy class, let's say ProductProxy, that has the same public signature as Product, so we can use it INSTEAD OF Product in our code. ProductProxy overrides the Customer property. If the customer is not asked for, then Customer is null. But if we ask for the Customer property, then the overridden property of ProductProxy loads it from the database. This is how lazy loading works.

    Problem – two types for the same thing

    As lazy loading may introduce dynamically generated proxy types, we don't know in our application code which type is returned. We cannot be sure that we have Product and not ProductProxy returned. This leads us to the following question: how can we create a Product of the correct type if we don't know the correct type? In EF the solution is simple.

    Solution – use factory methods

    If you are using repositories and you are not using factories (IMHO factories are pretty pointless with a mapper), you can add factory methods to your EF-based repositories. Take a look at this class:

        public class Event
        {
            public int ID { get; set; }
            public string Title { get; set; }
            public string Location { get; set; }
            public virtual Party Organizer { get; set; }
            public DateTime Date { get; set; }
        }

    We have a virtual member called Organizer. This property is virtual because we want to use lazy loading on this class, so Organizer is loaded only when we ask for it. EF provides us with a method called CreateObject<T>(). CreateObject<T>() is a member of the ObjectContext class, and it creates an object based on the given type. At runtime a proxy type for Event is created for us automatically, and when we call CreateObject<T>() for Event it returns an object of the Event proxy type. The factory method for the events repository is as follows:

        public Event CreateEvent()
        {
            var evt = _context.CreateObject<Event>();
            return evt;
        }

    And we are done. Instead of creating factory classes, we created factory methods that guarantee that created objects are of the correct type.

    Conclusion

    Although lazy loading introduces new types that we cannot use at design time, because they live only at runtime, we can write code without worrying about the exact implementation type of an object. This holds true as long as we have clean code and we don't make any decisions based on object type. EF 4.0 provides us with a very simple factory method that creates and returns objects of the correct type. All we had to do was add factory methods to our repositories.
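    A hedged usage sketch for the factory method above: EventRepository is an assumed wrapper class around the repository code shown, and context.Events is assumed to be the ObjectSet<Event> on the EF 4.0 ObjectContext:

        // Creating the event through the repository yields the EF proxy type,
        // so evt.Organizer will lazy-load the first time it is touched.
        var repository = new EventRepository(context);  // assumed repository class
        Event evt = repository.CreateEvent();
        evt.Title = "Launch party";
        evt.Date = DateTime.Now;

        context.Events.AddObject(evt);  // ObjectSet<Event>.AddObject, EF 4.0
        context.SaveChanges();

        // By contrast, "new Event()" would create the plain POCO type, and
        // lazy loading of Organizer would not work on that instance.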

    Read the article
