Search Results

Search found 22625 results on 905 pages for 'better'.


  • Saving twice doesn't update my object in JDO

    - by Javi
    Hello, I have an object persisted in the GAE datastore using JDO. The object looks like this:

        public class MyObject implements Serializable, StoreCallback {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true")
            private String id;

            @Persistent
            private String firstId;
            ...
        }

    As usual, when the object is stored for the first time a new id value is generated for the identifier. I need that, if I don't provide a value for firstId, it is set to the same value as the id. I don't want to solve it with a special getter that checks firstId for null and returns the id value instead, because I want to run queries on firstId. I can do it by saving the object twice (there's probably a better way to do this, but I'll do it this way until I find a better one). But it is not working: when I debug it I can see that result.firstId is set to the id value and it seems to be persisted, but when I look in the datastore I see that firstId is null (as it was saved the first time). This save method is in my DAO and it is called from another save method in the service annotated with @Transactional. Does anyone have any idea why the second object is not persisted properly?

        @Override
        public MyObject save(MyObject obj) {
            PersistenceManager pm = JDOHelper.getPersistenceManagerFactory("transactions-optional");
            MyObject result = pm.makePersistent(obj);
            if (result.getFirstId() == null) {
                result.setFirstId(result.getId());
                result = pm.makePersistent(result);
            }
            return result;
        }

    Thanks.
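
    For illustration, a minimal sketch of one way to avoid the double save: do both steps inside one explicit JDO transaction, so the change to firstId is flushed at commit rather than needing a second makePersistent(). The DAO class name and the pmf field are made up here, and this assumes the surrounding @Transactional wrapper is not already managing the same PersistenceManager.

        import javax.jdo.JDOHelper;
        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;
        import javax.jdo.Transaction;

        public class MyObjectDao {

            private final PersistenceManagerFactory pmf =
                    JDOHelper.getPersistenceManagerFactory("transactions-optional");

            public MyObject save(MyObject obj) {
                PersistenceManager pm = pmf.getPersistenceManager();
                Transaction tx = pm.currentTransaction();
                try {
                    tx.begin();
                    MyObject result = pm.makePersistent(obj);
                    if (result.getFirstId() == null) {
                        // result is still a managed instance inside the transaction,
                        // so this change is written out at commit - no second makePersistent.
                        result.setFirstId(result.getId());
                    }
                    tx.commit();
                    return result;
                } finally {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    pm.close();
                }
            }
        }

    The important difference from the original is that the PersistenceManager is closed and the transaction committed in the same place the change is made, so there is no window in which the second modification can be silently dropped.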


  • Why is MySQL with InnoDB doing a table scan when the key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from explain:

        mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16';
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected from this query. However, if I query for just one more minute of data, we get this query plan:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        1 row in set (0.00 sec)

    The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help, and I'm more than happy to provide more info/answer questions.


  • Elegantly handling constraint violations in EJB/JPA environment?

    - by hallidave
    I'm working with EJB and JPA on a Glassfish v3 app server. I have an entity class where I'm forcing one of the fields to be unique with a @Column annotation:

        @Entity
        public class MyEntity implements Serializable {
            private String uniqueName;

            public MyEntity() { }

            @Column(unique = true, nullable = false)
            public String getUniqueName() { return uniqueName; }

            public void setUniqueName(String uniqueName) { this.uniqueName = uniqueName; }
        }

    When I try to persist an object with this field set to a non-unique value, I get an exception (as expected) when the transaction managed by the EJB container commits. I have two problems I'd like to solve:

    1) The exception I get is the unhelpful "javax.ejb.EJBException: Transaction aborted". If I recursively call getCause() enough times, I eventually reach the more useful "java.sql.SQLIntegrityConstraintViolationException", but this exception is surfaced by the EclipseLink implementation and I'm not really comfortable relying on its existence. Is there a better way to get detailed error information with JPA?

    2) The EJB container insists on logging this error even though I catch it and handle it. Is there a better way to handle this error which will stop Glassfish from cluttering up my logs with useless exception information?

    Thanks.
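
    Worth noting that java.sql.SQLIntegrityConstraintViolationException is a JDK class rather than an EclipseLink one, so one hedged option for (1) is a small helper that walks the cause chain generically instead of depending on provider types. The class and method names below are made up for the sketch:

        import java.sql.SQLIntegrityConstraintViolationException;

        public final class ExceptionUtil {

            private ExceptionUtil() { }

            // Returns true if any cause in the chain is an integrity-constraint violation.
            public static boolean isConstraintViolation(Throwable t) {
                while (t != null) {
                    if (t instanceof SQLIntegrityConstraintViolationException) {
                        return true;
                    }
                    t = t.getCause();
                }
                return false;
            }
        }

    The caller can then catch javax.ejb.EJBException and ask ExceptionUtil.isConstraintViolation(e) to decide whether to show a "name already taken" message, without caring which JPA provider produced the chain. Pre-checking uniqueness with a query is another option, but it is not race-free on its own.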


  • Advantages/disadvantages of Python and Ruby

    - by Seburdis
    I know this is going to seem a little like all the other Python vs. Ruby questions out there, but I'm not looking to pick one over the other for all time. My question is, essentially: why would you use one language over the other when starting a new project? What features does Ruby have that Python doesn't that would make you decide on it for a given project? What about Python over Ruby? I was recently thinking about the differences between the two languages because of Jamis Buck's "There is no magic, only awesome" series of articles (4 parts, available here), when I realized I really don't know enough about either language to know when to choose one over the other. I'm hoping to get objective answers from people who have experience with both languages, rather than just "Python is better, Ruby sucks" kinds of responses. If you know of a feature in one language that doesn't exist in the other and is great in a certain situation, feel free to chime in and say why you think it's awesome. If you have another language comparable to these that you'd like to suggest pros/cons for, like Groovy for example, that would be appreciated too. Some things I know each language has going for it:

    Ruby:
    - Awesome metaprogramming
    - Great community
    - Wide selection of gems
    - Rails
    - Great code readability, usually
    - MacRuby is great for native development on Mac without Obj-C
    - Amazing testing tools (cucumber, rspec, shoulda, autotest, etc.)

    Python:
    - Whitespace indentation
    - List comprehensions
    - Better functional programming support?
    - Lots of support on Linux
    - easy_install isn't far from gems
    - Great variety of libraries available


  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of textual pages from many books. I need to find pages that satisfy complex logical criteria. These criteria can contain virtually any combination of the following:

    A: Full words.
    B: Word roots (similar to stems, i.e. all words with certain key letters).
    C: Word templates (in some languages words are filled into certain templates to form various parts of speech, such as adjectives or past/present verbs).
    D: Logical connectives: AND/OR/XOR/NOT/IF/IFF and parentheses to state priorities.

    Now, would it be faster to keep the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples, so we can speed up searching for individual words/roots/templates? However, it gets tricky as we introduce logical connectives into the query. I thought of doing the following in such cases:

    1: Separately search for each individual word/root/template in the specified query.
    2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective.

    For example, if we are searching for "he AND (is OR was)":

    1: Search for "he", "is" and "was" separately and get a result list for each word.
    2: Merge the result lists of "is" and "was" using the merging function OR-MERGE.
    3: Merge the merged result list from the OR-MERGE function with the one for "he" using the merging function AND-MERGE.

    The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
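
    For the merging step, a rough sketch of what OR-MERGE and AND-MERGE might look like over sorted posting lists of page ids (the class and method names are made up, and the lists are assumed sorted and duplicate-free) - each connective is then a classic two-pointer merge in O(n+m) rather than another scan:

        import java.util.ArrayList;
        import java.util.List;

        public final class PostingMerge {

            // OR-merge: union of two sorted, duplicate-free posting lists.
            public static List<Integer> or(List<Integer> a, List<Integer> b) {
                List<Integer> out = new ArrayList<Integer>();
                int i = 0, j = 0;
                while (i < a.size() && j < b.size()) {
                    int x = a.get(i), y = b.get(j);
                    if (x < y)      { out.add(x); i++; }
                    else if (x > y) { out.add(y); j++; }
                    else            { out.add(x); i++; j++; }
                }
                while (i < a.size()) out.add(a.get(i++));
                while (j < b.size()) out.add(b.get(j++));
                return out;
            }

            // AND-merge: intersection of two sorted posting lists.
            public static List<Integer> and(List<Integer> a, List<Integer> b) {
                List<Integer> out = new ArrayList<Integer>();
                int i = 0, j = 0;
                while (i < a.size() && j < b.size()) {
                    int x = a.get(i), y = b.get(j);
                    if (x < y) i++;
                    else if (x > y) j++;
                    else { out.add(x); i++; j++; }
                }
                return out;
            }
        }

    The example query "he AND (is OR was)" would then become and(list("he"), or(list("is"), list("was"))), mirroring the priority order described above.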


  • Which SCM/VCS copes well with moving text between files?

    - by pfctdayelise
    We are having havoc with our project at work, because our VCS is doing some awful merging when we move information across files. The scenario is thus: You have lots of files that, say, contain information about terms from a dictionary, so you have a file for each letter of the alphabet. Users entering terms blindly follow the dictionary order, so they will put an entry like "kick the bucket" under B if that is where the dictionary happened to list it (or it might have been listed under both B, bucket and K, kick). Later, other users move the terms to their correct files. Lots of work is being done on the dictionary terms all the time. e.g. User A may have taken the B file and elaborated on the "kick the bucket" entry. User B took the B and K files, and moved the "kick the bucket" entry to the K file. Whichever order they end up getting committed in, the VCS will probably lose entries and not "figure out" that an entry has been moved. (These entries are later automatically converted to an SQL database. But they are kept in a "human friendly" form for working on them, with lots of comments, examples etc. So it is not acceptable to say "make your users enter SQL directly".) It is so bad that we have taken to almost manually merging these kinds of files now, because we can't trust our VCS. :( So what is the solution? I would love to hear that there is a VCS that could cope with this. Or a better merge algorithm? Or otherwise, maybe someone can suggest a better workflow or file arrangement to try and avoid this problem?


  • How to feature-detect/test for specific jQuery (and Javascript) methods/functions used

    - by Zildjoms
    Good day everyone, I hope you're all doing well. I am very new to JavaScript and jQuery, and I (think I) have come up with a simple fade-in/out implementation on a site I am working on (check out http://www.s5ent.com/expandjs.html - if you have the time to check it for inefficiency or whatever, that would be real sweet). I use the following functions/methods/collections and I would like to do a feature test before using them. Uhm... how? Or is there a better way to go about this?

    jQuery:
    $
    .fadeIn([duration])
    .fadeOut([duration])
    .attr(attributeName, value)
    .append(content)
    .each(function(index, Element))
    .css(propertyName, value)
    .hover(handlerIn(eventObject), handlerOut(eventObject))
    .stop([clearQueue], [jumpToEnd])
    .parent()
    .eq(index)

    JavaScript:
    setInterval(expression, timeout)
    clearInterval(timeoutId)
    setTimeout(expression, timeout)
    clearTimeout(timeoutId)

    I tried looking into jQuery.support for the jQuery ones, but I ran into conceptual problems with it, i.e. for fadeIn/fadeOut I (think I) should test for $.support.opacity, but that would be false in IE, whereas IE6+ can still render the fades fairly well. Also, I am using jQuery 1.2.6 because that's enough for what I need, and the support object is only in 1.3, so I'm hoping to avoid dragging in more unnecessary code if I can. I also worked with browser sniffing, however frowned-upon, but that's also a bigger problem for me because of non-standard UA strings and spoofing and everything else I am not aware of. So how do you think I should go about this? Or should I even? Is there a better way to make sure I don't run code that will eventually break the page? I've set it up to degrade to a CSS hover when JavaScript isn't available. Expertise needed - much appreciated, thanks!


  • Best terminal environment for Cygwin/Windows?

    - by Anders Sandvig
    Today I run Cygwin with rxvt using the following startup line:

        rxvt -bg black -sl 8192 -fg white -sr -g 150x56 -fn "Fixedsys" -e /usr/bin/bash --login -i

    This gives me a resizable native Windows window, which is much better than the standard "DOS box" the default cygwin.bat provides. However, the current configuration does have a couple of issues:

    1. I am not able to enter non-ASCII characters into the terminal window (i.e. æ, ø, å and Æ, Ø, Å), which I use semi-frequently. In fact, the terminal will not even accept them when I paste them into the window. If I paste a string like "bølle" (Norwegian for "bully"), all I get is "blle".
    2. I am not able to render UTF-8 characters; they only show as ?, even if they are supported by the font (i.e. when rendering the same characters in ISO-8859-1 they show just fine).

    I am running English Windows Vista with the locale and keyboard layout set to Norwegian (ISO-8859-1 character set?), but I've had exactly the same issue on Windows 2000 and XP. Does anyone know how to fix this (i.e. a better way to configure rxvt)? Apart from the issues mentioned above, I'm very happy with rxvt, so if I find a way to resolve them I'd like to continue using it. However, if the issues are not (easily) solvable, are there any other good terminal solutions for Cygwin?

    Update: The solution provided by Andy and Mattias (editing the .inputrc file) solved the input problem, but output rendering is still an issue. Output is fine when rendering in ISO-8859-1, but with UTF-8 I only get ? for non-ASCII characters. This behavior is consistent across rxvt, urxvt (under the Cygwin XFree X server), mintty and PuttyCyg. Is there a similar configuration file where the output encoding can be set (i.e. the equivalent of setting the output locale on a Linux system)?


  • Passing arguments to objects created using the new operator?

    - by Abhijit
    Hi guys, I have a small C++ problem to which I don't know the best solution. I have two classes A and B as follows:

        class A {
            int n;
            B* b;
        public:
            A(int num): n(num) {
                b = new B[n];
                for (int i = 0; i < n; i++) {
                    b[i].setRef(this);
                }
            }
            ~A() {
                delete [] b;
            }
        };

        class B {
            A* a;
        public:
            B() { }
            B(A* aref) { a = aref; }
            void setRef(A* aref) { a = aref; }
        };

    I am creating an object of class A by passing its constructor the number of objects of class B I want created. I want every object of class B to hold a pointer to the class A object that creates it. I think the best way to do this would be to pass the pointer to the class A object as a constructor argument to each class B object. However, since I'm using the new[] operator, the no-args constructor for class B is called. As a result, the only solution I can see is calling the setRef(A*) method on every object of class B after it has been constructed using the new operator. Is there a better solution or design pattern that would be more applicable here? Would using placement new for class B be a better solution? Thanks in advance for your help.


  • Java resource closing

    - by Bob
    Hi, I'm writing an app that connects to a website and reads one line from it. I do it like this:

        try {
            URLConnection connection = new URL("http://www.example.com").openConnection();
            BufferedReader rd = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String response = rd.readLine();
            rd.close();
        } catch (Exception e) {
            // exception handling
        }

    Is this good? I mean, I close the BufferedReader in the last line, but I do not close the InputStreamReader. Should I create a standalone InputStreamReader from connection.getInputStream(), and a BufferedReader from the standalone InputStreamReader, and then close both readers? I think it would be better to place the closing calls in a finally block, like this:

        InputStreamReader isr = null;
        BufferedReader br = null;
        try {
            URLConnection connection = new URL("http://www.example.com").openConnection();
            isr = new InputStreamReader(connection.getInputStream());
            br = new BufferedReader(isr);
            String response = br.readLine();
        } catch (Exception e) {
            // exception handling
        } finally {
            br.close();
            isr.close();
        }

    But this is ugly, because the close() methods can themselves throw exceptions, so I have to handle or declare them. Which solution is better? Or what would be the best solution?
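
    Two details worth knowing here, sketched below: BufferedReader.close() also closes the Reader it wraps, so a single close is enough, and a small quiet-close helper keeps the finally block from needing its own try/catch. The class and helper names are made up for the sketch:

        import java.io.BufferedReader;
        import java.io.Closeable;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;

        public class FetchLine {

            public static String readFirstLine(String url) throws IOException {
                BufferedReader rd = null;
                try {
                    URLConnection connection = new URL(url).openConnection();
                    rd = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                    return rd.readLine();
                } finally {
                    closeQuietly(rd);   // closes the underlying InputStreamReader too
                }
            }

            private static void closeQuietly(Closeable c) {
                if (c != null) {
                    try {
                        c.close();
                    } catch (IOException ignored) {
                        // nothing useful to do with a failure to close
                    }
                }
            }
        }

    The null check in closeQuietly matters because the reader may never have been assigned if openConnection() or getInputStream() threw.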


  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using:

    Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here) with mutation rates tested between 1% and 25%.
    Selection: Roulette Wheel Selection.
    Fitness function: 1 / distance of tour.
    Population size: Tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations.
    Stop condition: 2500 generations.

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima since it has no way of exploring once it finds a local max?
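
    For comparison, here is a rough sketch of one common ordered-crossover variant for permutation-encoded tours: copy a slice of one parent into the child at the same positions, then fill the remaining slots with the other parent's cities in their original order, skipping duplicates. The representation (cities numbered 0..n-1 as an int[]) and the class name are assumptions for the sketch; if a crossover operator produces children much worse than either parent, comparing it against something like this is a cheap sanity check.

        import java.util.Arrays;
        import java.util.Random;

        public final class OrderedCrossover {

            public static int[] crossover(int[] parent1, int[] parent2, Random rnd) {
                int n = parent1.length;
                int cut1 = rnd.nextInt(n);
                int cut2 = rnd.nextInt(n);
                if (cut1 > cut2) { int t = cut1; cut1 = cut2; cut2 = t; }

                int[] child = new int[n];
                Arrays.fill(child, -1);
                boolean[] used = new boolean[n];    // assumes cities are labelled 0..n-1

                // Copy the slice parent1[cut1..cut2] into the same positions of the child.
                for (int i = cut1; i <= cut2; i++) {
                    child[i] = parent1[i];
                    used[parent1[i]] = true;
                }

                // Fill the remaining positions, left to right, with parent2's cities
                // in their original order, skipping any city already present.
                int pos = 0;
                for (int i = 0; i < n; i++) {
                    int city = parent2[i];
                    if (used[city]) continue;
                    while (child[pos] != -1) pos++;
                    child[pos] = city;
                    used[city] = true;
                }
                return child;
            }
        }

    Every child produced this way is a valid tour (each city appears exactly once), which is the property ordinary one-point crossover breaks for TSP.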


  • What are some programming techniques for converting SD images to HD images

    - by Dr Dork
    I'm taking a programming class and the instructor loves to work with images, so most of our assignments involve manipulating raw RGB image data. One of our assignments is to implement a standard image converter that converts SD images to HD images and vice versa. I always take advantage of these types of assignments to go a little beyond what we were asked to do, so I added a basic anti-aliasing process that uses the average color of the 3x3 surrounding pixels to improve the converted image. While it helps a bit, the resulting image still doesn't look good, which is OK because it's not expected to for the assignment. I've learned that converting SD images to HD is much harder than downsampling from HD to SD, because going from SD to HD effectively means increasing resolution that is not there. Obviously it is hard to create pixels from nothing, but I'd like to enhance my anti-aliasing into something that provides better results when upscaling an image. Most of the techniques I find and read about on the internet are far beyond my level of image processing and programming. Can anybody suggest better methods or processes to create good HD content from SD content that might be within my programming skill level? I know that's a difficult thing to gauge since you don't know me, but perhaps knowing that I can write C++ code to read in raw RGB data and upscale/downscale it with simple averaging anti-aliasing will give you an idea. Thanks in advance for all your help!
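
    At that level, bilinear interpolation is the usual step up from nearest-neighbour or plain averaging, and it only needs the same raw pixel arrays already in play. A rough sketch (one channel shown; for RGB run it per channel), written in Java here but the arithmetic carries straight over to C++ - the class name is made up, and it assumes the target size is at least 2x2:

        public final class Bilinear {

            // Bilinear upscale of a single channel stored row-major in src (w x h).
            public static int[] resize(int[] src, int w, int h, int newW, int newH) {
                int[] dst = new int[newW * newH];
                for (int y = 0; y < newH; y++) {
                    // Map the destination pixel back into source coordinates.
                    double sy = (double) y * (h - 1) / (newH - 1);
                    int y0 = (int) Math.floor(sy);
                    int y1 = Math.min(y0 + 1, h - 1);
                    double fy = sy - y0;
                    for (int x = 0; x < newW; x++) {
                        double sx = (double) x * (w - 1) / (newW - 1);
                        int x0 = (int) Math.floor(sx);
                        int x1 = Math.min(x0 + 1, w - 1);
                        double fx = sx - x0;

                        // Blend the four surrounding source pixels by their distances.
                        double top    = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
                        double bottom = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
                        dst[y * newW + x] = (int) Math.round(top * (1 - fy) + bottom * fy);
                    }
                }
                return dst;
            }
        }

    Bicubic interpolation is the natural next step after this and stays within the same "loop over destination pixels, sample the source" structure, just with a larger neighbourhood.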


  • Creating an SCM browser for an RCP application

    - by mdamman
    I have an RCP app that saves its project as an XML file; currently the user just selects a directory to save that file to and then uses the open-file dialog to open the project. We are thinking about enhancing it to allow users to check in/out from a source code manager. This will make it easier for users to share their projects with each other, with all the benefits of an SCM. I need something similar to Subclipse, but I was thinking of using the Maven SCM plugin so that it is more flexible about which SCM is used. It would probably be better to keep it simple, because most users won't have a clue what an SCM is. The ideal would be just having a Checkout menu option which opens a dialog similar to the Open File dialog. I was wondering if anyone had an example of how to use Maven SCM - what calls to make to set the SCM location and to get the file? Or is there a better way of going about this? Thanks!


  • Destructuring assignment in JavaScript

    - by Anders Rune Jensen
    As can be seen in the Mozilla changelog for JavaScript 1.7, they have added destructuring assignment. Sadly I'm not very fond of the syntax (why write a and b twice?):

        var a, b;
        [a, b] = f();

    Something like this would have been a lot better:

        var [a, b] = f();

    That would still be backwards compatible. Python-like destructuring would not be backwards compatible. Anyway, the best solution for JavaScript 1.5 that I have been able to come up with is:

        function assign(array, map) {
            var o = Object();
            var i = 0;
            $.each(map, function(e, _) {
                o[e] = array[i++];
            });
            return o;
        }

    Which works like:

        var array = [1, 2];
        var _ = assign(array, { var1: null, var2: null });
        _.var1; // prints 1
        _.var2; // prints 2

    But this really sucks, because _ has no meaning. It's just an empty shell to store the names. But sadly it's needed because JavaScript doesn't have pointers. On the plus side you can assign default values in case the values are not matched. Also note that this solution doesn't try to slice the array, so you can't do something like {first: 0, rest: 0} - but that could easily be done if one wanted that behavior. What is a better solution?


  • Best way to display "High Scores" results

    - by George
    First, I would like to thank everyone for all the help they provide via this website. It has gotten me to the point of almost being able to release my first iPhone app! Okay, so the last part I have is this: I have a game that allows users to save their high scores. I update a plist file which contains the user's name, level, and score. Now I want to create a screen that will display the top 20 high scores. What would be the best way to do this? At first I thought of creating an HTML file with this info, but I am not even sure if that is possible - I would need to read the plist file and then write it out as HTML. Is this possible? To write a file out as HTML? Or, an even better question: is there a better way? Thanks in advance for any and all help! Geo...


  • Good mobile oriented GWT widget library alternatives

    - by Michael Donohue
    I've been developing a travel planning site - tripgrep.com - which is built on appengine, GWT and smartgwt, among other technologies. It is still early days, and the site is now working well on my development environment, which is either a windows or mac computer. However, I am frequently talking up the website to my friends when we are at a bar or other venue, so I am standing there while they try to access the site via an iPhone, Android or Blackberry - I've witnessed all three. It has been painfully obvious that the browser based frontend takes a long time to download on a mobile device. I am pretty sure this is because of the javascript download for SmartGWT. So, I would like to look at alternatives to SmartGWT. What I like about SmartGWT is that it has a reasonable look and feel out of the box - I don't need to learn any design or css and it has an office application look. This is considerably better than the GWT built-in widgets, which just get a blue border. The better look-and-feel is why I went with SmartGWT early on. However, the slow load times are killing me on these mobile demos. So now I want a fast loading widget alternative that has good look-and-feel out of the box. The features I care about are: tabs, good form layout, Google maps API integration, grid data viewing. If those are all available in a library that loads quickly on a mobile device, then that's the library I want.


  • GUI for generating XML

    - by Kenston
    Hello. Does anybody know of GUIs for generating XML? By this I mean the user will not be presented with an IDE with XML support in which to type XML code by hand - that would be unhelpful for the non-technical people using the system. I know this sounds easy, given the many libraries that can help in generating XML. The issue here is that the schema is really flexible, rather than being straightforward like representing books in a library with their properties. Imagine HTML, where we can create font tags inside a body, a table, a div, or nested even within themselves. The usual solution there is a WYSIWYG application that lets the user generate the HTML (XML), but that works for web pages because they are visual by nature. My application of XML focuses on representing conceptual and computational definitions - something like an SQL-like syntax, but more than that. I'm actually after the approach, or previous work done or tried, although having a library/working framework would be better. By the way, I'm using Java for this project. Currently I'm just thinking of presenting element tags that the user can drag and drop and nest within each other, and perhaps assisting them with forms for inputting the values of XML attributes. I'm still hoping for better ideas from the community. Thank you.
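
    Whatever the drag-and-drop front end ends up looking like, the back end can stay simple by building a DOM in memory as the user nests elements and serializing it on save. A minimal sketch with the standard javax.xml APIs - the element and attribute names here are invented, just to show the shape:

        import java.io.StringWriter;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class XmlBuilderDemo {

            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();

                // Hypothetical structure: a "definition" nesting a "condition".
                Element root = doc.createElement("definition");
                root.setAttribute("name", "example");
                doc.appendChild(root);

                Element condition = doc.createElement("condition");
                condition.setTextContent("x > 0");
                root.appendChild(condition);

                // Serialize with indentation so the saved file stays readable.
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.INDENT, "yes");
                StringWriter out = new StringWriter();
                t.transform(new DOMSource(doc), new StreamResult(out));
                System.out.println(out);
            }
        }

    Each drag-and-drop nesting action would map to a createElement/appendChild pair, with the schema deciding which element types the GUI offers at each nesting level.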


  • LINQDataSource and private columns

    - by fyjham
    Hey, I was trying to use a ListView bound to a LinqDataSource to insert into a table where a few columns are private to the table class (specifically password columns - the only access I want to give outside the class is methods that generate the salt and encrypt the password to store it in one go). I gave this a few shots, but I didn't come up with anything I really liked, so I was wondering if anyone has a better way to do this. The methods I've found:

    1. Use the LinqDataSource Inserting event and make the appropriate calls on e.NewObject. I don't really like this because it's so far removed from the actual input, and there's no simple way to hold the password in the meantime other than a class variable set during the ListView's Inserting event (which works, but seems a little dodgy).
    2. Open up these properties and just ask everyone to use the appropriate static methods for encoding the passwords they pass in. I don't really like this because I'd prefer the class to enforce data integrity rather than relying on all calling code doing it properly.

    I'm currently going with option 1, but I don't really like passing values between events using class variables like that (it just seems unstructured, even though I can guarantee the events will happen in the right order). Does anyone know a better way? Or alternatively, am I being too pedantic and is one of the methods above actually the right way to go? Thanks


  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
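
    Since the update already points at a binary heap, this is roughly what the whole processing loop reduces to with java.util.PriorityQueue - the class name and the work() placeholder are made up, and BigInteger stands in for whatever big-rational type (or Comparable wrapper) is actually used:

        import java.math.BigInteger;
        import java.util.Collections;
        import java.util.List;
        import java.util.PriorityQueue;

        public class MinFirstProcessing {

            public static void run(BigInteger seed) {
                PriorityQueue<BigInteger> queue = new PriorityQueue<BigInteger>();
                queue.offer(seed);
                while (!queue.isEmpty()) {
                    BigInteger smallest = queue.poll();   // O(log n) removal of the minimum
                    for (BigInteger larger : work(smallest)) {
                        queue.offer(larger);              // O(log n) insert, 0-2 per step
                    }
                }
            }

            // Placeholder for the real processing step; returns 0-2 items,
            // each larger than the input.
            private static List<BigInteger> work(BigInteger item) {
                return Collections.emptyList();
            }
        }

    The deletion-from-the-left pattern that worried me with a red-black tree is a non-issue here: a heap only ever maintains the partial order needed to expose the minimum, so the delete-min/insert mix stays O(log n) regardless of the access pattern.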


  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing - if the mouse button is held down, then execute the drawing logic (this is being done in the mouse movement handler for when you drag).

    Option one says put it in the function so that it's guaranteed to be checked. Abstracted, if you will:

        public function Foo() {
            DoThing();
        }

        private function DoThing() {
            if (!condition) return;
            // do stuff
        }

    The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked. Option two, then, is to check before calling:

        public function Foo() {
            if (condition)
                DoThing();
        }

    This reads better, but now you have to worry about checking from everywhere you call it. Option three is to rename the function to be more descriptive:

        public function Foo() {
            DoThingOnlyIfCondition();
        }

        private function DoThingOnlyIfCondition() {
            if (!condition) return;
            // do stuff
        }

    Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this, function names would start to duplicate their code. About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I is the second best way to learn. Subjective questions are exactly the kind of thing Google can't answer.


  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use data mappers that completely abstract the database from the domain layer. However, I am now wondering which is the better approach to handling relationships between domain objects:

    1. Call the necessary find() method from the related data mapper directly within the domain object.
    2. Write the relationship logic into the native data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper method within the domain object.

    Either way, it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether their own, or by having access to the other mappers in the system). Additionally, option 2 seems to complicate the data access layer unnecessarily, as it spreads table access logic across multiple data mappers instead of confining it to a single data mapper. So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper methods directly from the domain objects?

    Update: These are the only two solutions I can envision for handling relations between domain objects. Any example showing a better method would be welcome.


  • Read HTML file into email

    - by cast01
    I've written a script to send HTML emails and it all works well. I have the email stored in a separate HTML file which is then read in using a while loop and fgets(). However, I want to be able to pass variables into the HTML. For example, an HTML file may contain something like:

        <body>
        Dear Name
        <br/>
        Thank you for your recent purchase
        </body>

    and I read this into a string like so:

        $file = fopen($filename, "r");
        while (!feof($file)) {
            $html .= fgets($file, 4096);
        }
        fclose($file);

    I want to be able to replace "Name" in the HTML file with a variable, and I'm not entirely sure of the best way to do this. I could always make my own tag and then use a regex to replace it with the name once I've read the file into the string, but I'm wondering if there is a better/easier method. (On a side note, if anyone knows whether it's better to use file_get_contents instead of multiple calls to fgets, I'd be interested to know.)


  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to setup Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make in Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSH's to the Linux-x86 PC, rsync's the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Liunx-x86 PC, specifying the correct path for the Makefile. In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsync'd back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)


  • What features do you miss most when switching from IntelliJ to XCode ?

    - by Ori
    Hi, I started using XCode several months ago, after using IntelliJ for several years, and there are quite a few features that I really miss. XCode is not that bad, but it is lacking some basic stuff. To trigger the discussion, here are some of the features I miss most; who knows, maybe someone from Apple will bump into this post and steal some ideas :)

    1. Source-level error highlighting. The write-compile-fix cycle feels like going back in time 15 years to my early C days. Many errors can be spotted without having to compile, and Java IDEs have been doing it for years.
    2. A decent debugger. This is a bit unfair because IntelliJ's debugger is the best I've used so far, but XCode's debugger is at least 5 years behind, and Apple has a few more developers than JetBrains...
    3. Stronger refactoring. A no-brainer, I guess. XCode has some renaming capabilities (which it calls refactoring), but they are very few.
    4. Override method. This one is really amazing: XCode doesn't have an "override method" command which lets you choose the method you want to override from a superclass or protocol. You need to go to the documentation or header file and start copy-pasting.
    5. Duplicate selected line(s). I've bumped into posts that offer workarounds for this via custom key bindings, but none of them works, at least for me.
    6. Go to last edit point. Bummer! Come on Apple, this one is so easy to implement and so useful!
    7. A better "open quickly" feature. IntelliJ's quick find of classes/files/text is so much better...

    It turns out that my list goes on and on, so I'll stop here... What other features do you miss most in your transition to XCode? Ori


  • I Need a Human Readable, Yet Parse-able Document Format

    - by macinjosh
    I'm working on one of those projects where there are a million better ways to accomplish what I need, but I have no choice and have to do it this way. Here it is: there is a web form, and when the user fills it out and hits submit, a human-readable text file is created from the form data. It looks like this:

        field_1: value for field one
        field_2: value for field two
        more data for field two (field two has a newline in it!)
        field3: some more data

    My problem is this: I need to parse this text file back into the web form so that the user can edit it. How could I accomplish this in a foolproof way? A database is not an option; I have to use these text files. My questions:

    1. Is there a foolproof way to do this using the format in the example above?
    2. What human-readable format would work better (in other words, I can change the format)?

    Human readable means that a non-programmer could read it and know what is what. This project uses PHP.
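
    Assuming field names always look like word characters followed by a colon at the start of a line, and anything else is a continuation of the previous field, the round-trip parser is only a few lines. It is sketched here in Java (since the PHP side isn't shown), but the same regex and loop translate directly to preg_match and fgets; the class name is made up:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.StringReader;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FormFileParser {

            // A line that starts a new field: "field_1: value..."
            private static final Pattern FIELD = Pattern.compile("^(\\w+):\\s?(.*)$");

            // Parses the document back into field -> value; continuation lines are
            // appended (with the newline restored) to the last field seen.
            public static Map<String, String> parse(String text) throws IOException {
                Map<String, String> fields = new LinkedHashMap<String, String>();
                BufferedReader reader = new BufferedReader(new StringReader(text));
                String line;
                String current = null;
                while ((line = reader.readLine()) != null) {
                    Matcher m = FIELD.matcher(line);
                    if (m.matches()) {
                        current = m.group(1);
                        fields.put(current, m.group(2));
                    } else if (current != null) {
                        fields.put(current, fields.get(current) + "\n" + line);
                    }
                }
                return fields;
            }
        }

    The obvious weak spot is a value line that itself happens to look like "name:", which is exactly the foolproof-ness problem the question raises; escaping or indenting continuation lines in the written format would close that hole.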

