Search Results

Search found 94227 results on 3770 pages for 'common code'.


  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. The project is mainly a rewrite of a fairly large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts:
    - there will be 2-3 developers,
    - at least one developer uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should host the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, Scrum, daily deployments.

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:
    - developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on the Vagrant machine runs the tests,
    - when everything is fine, the developer pushes the commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs the tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance.

    Deployment to production is manual (although it can be done with helper scripts) once the business logic has been tested by the other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things I'm not sure about:
    - Remote machine: won't there be problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop against a PHP environment is simply wise. Isn't it overkill with Tomcat? I mean, isn't it more likely that Tomcat will behave the same on every machine anyway?
    - Is there any point in having a local Jenkins on the Vagrant box?
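
    For the Vagrant side of this setup, a minimal sketch of what each developer's box definition might look like. This is illustrative only: the base box name, forwarded ports and Puppet paths are assumptions, not taken from the question.

        # Vagrantfile (sketch): one VM per developer, provisioned by Puppet so the
        # Linux, Tomcat and Jenkins versions match the remote machine.
        Vagrant.configure("2") do |config|
          config.vm.box = "ubuntu/trusty64"                              # assumed base box
          config.vm.network "forwarded_port", guest: 8080, host: 8080    # Tomcat
          config.vm.network "forwarded_port", guest: 9090, host: 9090    # Jenkins (assumed port)
          config.vm.provision "puppet" do |puppet|
            puppet.manifests_path = "puppet/manifests"                   # assumed repo layout
            puppet.manifest_file  = "default.pp"                         # installs Tomcat + Jenkins
          end
        end

    With a file like this checked into the repository, "vagrant up" on Windows or Linux should produce the same test environment, which is the property the workflow above relies on.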

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects.

    So it seems like it would make sense to create a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code):

        Function GetQuote(String ID) Returns Quote

    So far, so good. Now, when this service is consumed, to keep things "de-coupled" we essentially create a duplicate of the Quote object and map from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases these classes have exactly the same properties. So, if the Quote service is consumed by 5 other applications, we have 6 definitions of what a "Quote" is: one for each consumer, and one for the service.

    This feels wrong. I thought code was supposed to be DRY, but it seems like our approach to SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?
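
    To make the duplication concrete, here is a sketch of what this looks like for one consumer (the class and property names are illustrative, not taken from the actual service):

        // Contract type returned by the Quote Retrieval Service (illustrative names only)
        public class Quote
        {
            public string Id { get; set; }
            public string Shipper { get; set; }
            public string Consignee { get; set; }
            // ... phone numbers, contacts, email addresses, other location info
        }

        // Near-identical copy owned by one consuming application
        public class ConsumerQuote
        {
            public string Id { get; set; }
            public string Shipper { get; set; }
            public string Consignee { get; set; }
        }

        // Mapping code that ends up repeated, in some form, in every one of the 5 consumers
        public static class QuoteMapper
        {
            public static ConsumerQuote ToConsumerQuote(Quote q)
            {
                return new ConsumerQuote { Id = q.Id, Shipper = q.Shipper, Consignee = q.Consignee };
            }
        }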

  • Searching in Ruby on Rails - How do I search on each word entered and not the exact string?

    - by bgadoci
    I have built a blog application with Ruby on Rails and I am trying to implement a search feature. The blog application allows users to tag posts. The tags are created in their own table and belong to a post (belongs_to :post). When a tag is created, so is a record in the tags table where the name of the tag is tag_name, associated by post_id. Tags are strings.

    I am trying to allow a user to search for any word of a tag_name in any order. Here is what I mean: let's say a particular post has the tag 'ruby code controller'. With my current search feature, that tag will be found if the user searches for 'ruby', 'ruby code', or 'ruby code controller'. It will not be found if the user types 'ruby controller'. Essentially, I would like each word entered in the search to be searched for, not necessarily the exact string that was entered.

    I have been experimenting with providing multiple text fields to allow the user to type in multiple words, and also playing around with the code below, but can't seem to accomplish the above. I am new to Ruby and Rails, so sorry if this is an obvious question; before installing a gem or plugin I thought I would check whether there is a simple fix. Here is my code:

    View: /views/tags/index.html.erb

        <% form_tag tags_path, :method => 'get' do %>
          <p>
            <%= text_field_tag :search, params[:search], :class => "textfield-search" %>
            <%= submit_tag "Search", :name => nil, :class => "search-button" %>
          </p>
        <% end %>

    TagsController

        def index
          @tags = Tag.search(params[:search]).paginate :page => params[:page], :per_page => 5
          @tagsearch = Tag.search(params[:search])
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 100)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @tags }
          end
        end

    Tag model

        class Tag < ActiveRecord::Base
          belongs_to :post
          validates_length_of :tag_name, :maximum => 42
          validates_presence_of :tag_name

          def self.search(search)
            if search
              find(:all, :order => "created_at DESC", :conditions => ['tag_name LIKE ?', "%#{search}%"])
            else
              find(:all, :order => "created_at DESC")
            end
          end
        end
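
    For reference, one common way to match each word independently is to split the query and AND together one LIKE condition per word. This is only a sketch in the same Rails 2-style find syntax as the model above; the method name is hypothetical:

        # Illustrative only: a word-by-word variant of the search method above.
        def self.search_any_words(search)
          words = search.to_s.split
          return find(:all, :order => "created_at DESC") if words.empty?
          conditions = (["tag_name LIKE ?"] * words.size).join(" AND ")
          find(:all, :order => "created_at DESC",
                     :conditions => [conditions, *words.map { |w| "%#{w}%" }])
        end

    With this, 'ruby controller' builds "tag_name LIKE ? AND tag_name LIKE ?" with the values "%ruby%" and "%controller%", so the word order no longer matters.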

  • Parse text file on click - and then display

    - by John R
    I am thinking of a methodology for rapid retrieval of code snippets. I imagine an HTML table with a setup like this:

                 one        two        ...   ten
        one                 oneTwo()         oneTen()
        two      twoOne()                    twoTen()
        ...
        ten      tenOne()   tenTwo()

    When a user clicks a function in this HTML table, a snippet of code is shown in another div tag or perhaps a popup window (I'm open to different solutions).

    I want to maintain only one PHP file named utilities.php that contains a class called 'util'. This file and class will hold all the functions referenced in the above table (it is also used on various projects and is functional code). A key idea is that I do not want to update the HTML documentation every time I write or update a function in utilities.php. I should be able to click a function in the table and have PHP open the utilities file, parse out the appropriate function and display it in an HTML window.

    Questions:
    1) I will be coding this in PHP and JavaScript, but am wondering if similar scripts are available (for all or part) so I don't reinvent the wheel.
    2) Quick and easy Ajax suggestions are appreciated too (I will probably use jQuery, but am rusty).
    3) What methodology would you suggest for parsing the functions out of the utilities.php file? (I'm not too good with regex.)
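
    On question 3, one approach that avoids regex entirely is PHP's reflection API, which can report the file and line range of a method. A minimal sketch (the class and method names are the hypothetical ones from the table above):

        <?php
        // Sketch: pull the source of util::oneTwo() straight out of utilities.php
        require_once 'utilities.php';

        function method_source($class, $method) {
            $ref   = new ReflectionMethod($class, $method);
            $lines = file($ref->getFileName());          // the file as an array of lines
            $start = $ref->getStartLine() - 1;           // getStartLine() is 1-based
            $count = $ref->getEndLine() - $start;
            return implode('', array_slice($lines, $start, $count));
        }

        echo '<pre>' . htmlspecialchars(method_source('util', 'oneTwo')) . '</pre>';

    An Ajax endpoint could take the method name as a request parameter and return this snippet for the clicked cell.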

  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with the tracking on my site where I have seen a dramatic drop-off of visitors from a certain region. I have four regions on my site at the moment: UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screen shot from my GA account which shows this effect.

    My GA code, which is included on every page of the site, is below. I have changed the UA account number intentionally for this example. No changes have been made to the GA account or the tracking code in the live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I am not tracking page views on certain pages, as I have event tracking set up for those pages.

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-18721873-5']);
          _gaq.push(['_setCookiePath', '/row/']);
          if ( typeof(p_page) != 'undefined') {
            // do nothing if user is on above pages
            // N.B. there are a series of conditions in this if statement checking
            // that we are not on a particular page
          } else {
            _gaq.push(['_trackPageview']);
          }
        </script>

  • Question regarding Readability vs Processing Time

    - by Jordy
    I am creating a flowchart for a program with multiple sequential steps. Every step should only be performed if the previous step was successful. I use a C-based programming language, so the layout would be something like this:

    METHOD 1:

        if(step_one_succeeded()) {
          if(step_two_succeeded()) {
            if(step_three_succeeded()) {
              //etc. etc.
            }
          }
        }

    If my program had 15+ steps, the resulting code would be terribly unfriendly to read. So I changed my design and implemented a global error code that I keep passing by reference, to make everything more readable. The resulting code would be something like this:

    METHOD 2:

        int _no_error = 0;
        step_one(_no_error);
        if(_no_error == 0) step_two(_no_error);
        if(_no_error == 0) step_three(_no_error);
        if(_no_error == 0) step_four(_no_error);

    The cyclomatic complexity stays the same. Now let's say there are N steps, and let's assume that checking a condition costs 1 clock and performing a step takes no time. The cost of Method 1 can be anywhere between 1 and N checks, while the cost of Method 2 is always N-1 checks, so Method 1 will be faster most of the time. Which brings me to my question: is it bad practice to sacrifice time in order to make the code more readable? And why (not)?

  • Combining pathfinding with global AI objectives

    - by V_Programmer
    I'm making a turn-based strategy game using Java and LibGDX, and now I want to code the AI. I haven't written the AI code yet; I've only designed it. The AI will have two components: one focused on tactics and resource management (create troops, determine who has the strategic advantage, detect important objectives, etc.) and an individual component, focused on assigning work to each unit, examining its possibilities and moving it.

    Now I'm facing an important problem. The map where the action takes place is grid-based, and each terrain type has a different movement cost. I read about pathfinding and I think A* is a very good option to determine a good route between two points. However, imagine I have a unit with movement = 5 (i.e., it can move 5 tiles of movement cost = 1). My tactical AI has found an objective at a distance of d = 20 tiles (Manhattan distance) from my unit. My problem is the following: the unit won't be able to reach the objective in one turn, so the AI will have to store a list of positions and execute them over several turns. I don't know how to solve this.

    PS. In my unit code, I have a list called "selectionMarks" which stores all the possible places where the unit can go this turn. These places are calculated recursively using a "getSelectionMarks" function. Any help is appreciated :D
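
    A sketch of the multi-turn idea in the question's own terms: compute the full A* route once, keep the remainder on the unit, and spend movement points on it each turn. The Tile type and all method names here are hypothetical, not LibGDX classes:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        // Hypothetical sketch: Tile stands in for a grid cell with a terrain movement cost.
        interface Tile { int cost(); }

        class Unit {
            private final Deque<Tile> plannedPath = new ArrayDeque<Tile>(); // rest of the A* route
            private int movement = 5;

            void planRoute(List<Tile> aStarRoute) {        // full route from the tactical AI's A*
                plannedPath.clear();
                plannedPath.addAll(aStarRoute);
            }

            boolean executeTurn() {                        // returns true once the objective is reached
                int budget = movement;
                while (!plannedPath.isEmpty() && budget >= plannedPath.peek().cost()) {
                    budget -= plannedPath.peek().cost();
                    moveTo(plannedPath.poll());            // game-specific movement/animation
                }
                return plannedPath.isEmpty();              // leftover tiles are executed next turn
            }

            private void moveTo(Tile t) { /* ... */ }
        }

    If the map changes between turns (another unit blocks the route), the tactical component can simply call planRoute again with a fresh A* result.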

  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I have received the request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten.

    I can see a number of solutions, but they all have some disadvantages:

    1. Create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes, and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile a different version of the wrapper for each project, this is quite cumbersome.

    2. Run the engine as a separate process. The wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, and I don't know which IPC technology would be best for setting up this kind of construction. There may also be a significant overhead in the communication: the engine needs to frequently exchange arrays of floating-point numbers.

    3. Adapt the engine to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference relative to a particular project. There are about 20-30 global variables, but as you can imagine, these globals are referenced all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to inspect the current call stack and find the nearest enclosing instance of a relevant data value there?

    Can the community give some advice on these (or other) solutions?
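
    For the third option, a minimal C sketch of the mechanical shape of the conversion (the variable and function names are invented, and the real engine has 20-30 globals rather than two):

        /* Before: file-scope globals scattered through the engine
           static double *samples;  static int sample_count;                 */

        /* After: the same data gathered into a project structure */
        typedef struct Project {
            double *samples;
            int     sample_count;
            /* ... the remaining former globals ... */
        } Project;

        /* One "current project" pointer, switched by the wrapper per call */
        static Project *current_project = NULL;

        void engine_set_current_project(Project *p) { current_project = p; }

        /* Former references to the globals become references through the pointer */
        int engine_sample_count(void) { return current_project->sample_count; }

    The wrapper would then call engine_set_current_project before forwarding each request, which avoids threading the project pointer through every function signature, at the cost of not being safe for concurrent calls into different projects.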

  • Commands in Task-It - Part 1

    Download Source Code

    NOTE: To run the source code provided, you will need to update to the RC (release candidate) versions of Silverlight 4 and Visual Studio 2010.

    In recent blog posts, like my MVVM post, I used Commands to invoke actions, like saving a record. In this rather simplistic sample I will talk about the basics of Commands, and in my next post I will get deeper into it.

    What is a Command? I remember the first time a UI designer used the word "command" I wasn't really sure what she was referring to. I later realized that it is just a term used to represent some UI control that can invoke an action, like a Button, HyperlinkButton, RadMenuItem, RadRadioButton, etc.

    Why should we use Commands? I'm sure you're familiar with the code-behind approach of handling events. For example, if you had a Button and a RadMenuItem that ...
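
    As a minimal illustration of what a Command looks like in code, here is a generic DelegateCommand-style sketch of the kind commonly used in MVVM view models. This is not the Task-It source; names and structure are illustrative only.

        using System;
        using System.Windows.Input;

        // Wraps a delegate so a Button, HyperlinkButton or RadMenuItem can invoke it
        // through its Command property instead of a code-behind event handler.
        public class DelegateCommand : ICommand
        {
            private readonly Action _execute;
            private readonly Func<bool> _canExecute;

            public DelegateCommand(Action execute, Func<bool> canExecute)
            {
                _execute = execute;
                _canExecute = canExecute;
            }

            // Raised by the view model when the CanExecute state changes (omitted in this sketch).
            public event EventHandler CanExecuteChanged;

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute();
            }

            public void Execute(object parameter)
            {
                _execute();
            }
        }

    A view model would expose an ICommand property built from this class, and the XAML would bind a control's Command property to it.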

  • Objective-C++ Memory Problem

    - by Stephen Furlani
    Hello, I'm having memory woes. I've got a C++ library (Equalizer from Eyescale) and it uses the Traversal/Visitor pattern to allow you to add new functionality to its classes. I've finally figured out how it works, and I've got a Visitor that just returns the properties from one of the objects (since I don't know how they're allocated).

    So, my little bit of code does this:

        VisitorResult AGLContextVisitor::visit( Channel* channel )
        {
            // Search through Nodes, Pipes until we get to the right window.
            // Add some code to make sure we find the right one?
            // Not executing the following code as C++ in gdb?
            eq::Window* w = channel->getWindow();
            OSWindow* osw = w->getOSWindow();
            AGLWindow* aw = (AGLWindow *)osw;
            AGLContext agl_ctx = aw->getAGLContext();
            this->setContext(agl_ctx);
            return TRAVERSE_PRUNE;
        }

    So here's the problem:

        eq::Window* w = channel->getWindow();
        (gdb) print w
        0x0

    BUT if I do this:

        (gdb) set objc-non-blocking-mode off
        (gdb) print w=channel->getWindow()
        0x300effb9 // an honest memory location

    it sets w, as verified in the Debugger window of Xcode. It does the same thing for osw. I don't get it. Why would something work in gdb but not in the code? The file is entirely a .cpp file, but it seems to be running as Objective-C++, since I need to turn blocking off. Help!? I feel like I'm missing some memory-management basics here, either with C++ or Obj-C.

    [edit] channel->getWindow() is supposed to do this:

        /** @return the parent window. @version 1.0 */
        Window* getWindow() { return _window; }

    The code also executes fine if I run it from a C++-only application.

    [edit] No... I tried creating a simple stand-alone program since I was tired of running it as a plugin (messy to debug), and it doesn't work in that C++ program either. So I'm really at a loss as to what I'm doing wrong.

    Thanks, -- Stephen Furlani

  • problems texture mapping in modern OpenGL 3.3 using GLSL #version 150

    - by RubyKing
    Hi all, I'm trying to do texture mapping using modern OpenGL and GLSL 150. The problem is the texture shows but has this weird flicker; I can show a video here: http://www.youtube.com/watch?v=xbzw_LMxlHw

    I have everything set up as best I can: my texcoords are in my vertex array and sent up to OpenGL, my fragment colour is set from the texture and texel values, my vertex shader passes the texture coordinates on to the fragment shader, and my ins and outs are set up. I still don't know what I'm missing that could be causing that flicker. Here is my code.

    FRAGMENT SHADER

        #version 150

        uniform sampler2D texture;
        in vec2 texture_coord;
        varying vec3 texture_coordinate;

        void main(void){
            gl_FragColor = texture(texture, texture_coord);
        }

    VERTEX SHADER

        #version 150

        in vec4 position;
        out vec2 texture_coordinate;
        out vec2 texture_coord;
        uniform vec3 translations;

        void main()
        {
            texture_coord = (texture_coordinate);
            gl_Position = vec4(position.xyz + translations.xyz, 1.0);
        }

    Last bit here is my vertex array with texture coordinates:

        GLfloat vVerts[] = {
            0.5f, 0.5f, 0.0f,   0.0f, 1.0f,
            0.0f, 0.5f, 0.0f,   1.0f, 1.0f,
            0.0f, 0.0f, 0.0f,   0.0f, 0.0f,
            0.5f, 0.0f, 0.0f,   1.0f, 0.0f };   // tex x and y

    HERE IS THE ACTUAL FULL SOURCE CODE: if you need to see all the code in its fullest glory, here is a link to every file: http://ideone.com/7kQN3

    Thank you for your help.

  • deallocated memory in tableview: message sent to deallocated instance

    - by Kirn
    I tried looking up other issues but couldn't find anything to match, so here goes. I'm trying to display text in a table view, so I use this bit of code (this is all under cellForRowAtIndexPath):

        // StockData is an object I created; it pulls information from Yahoo APIs based on
        // a stock ticker stored in NSString *heading
        NSArray *tickerValues = [heading componentsSeparatedByString:@" "];
        StockData *chosenStock = [[StockData alloc] initWithContents:[tickerValues objectAtIndex:0]];
        [chosenStock getData];

        // Set up the cell...
        NSDictionary *tempDict = [chosenStock values];
        NSArray *tempArr = [tempDict allValues];
        cell.textLabel.text = [tempArr objectAtIndex:indexPath.row];
        return cell;

    When I try to release the chosenStock object, though, I get this error:

        [CFDictionary release]: message sent to deallocated instance 0x434d3d0

    I've tried using NSZombieEnabled and Build and Analyze to detect problems, but no luck thus far. I've even gone so far as to comment out bits and pieces of the code with NSLog, but no luck. As far as I can figure, something is getting deallocated before I do the release, but I'm not sure how. The only place I've got a release in my code is under the dealloc method. Here's the StockData code:

        // StockData contains all stock information pulled in through Yahoo! to be displayed
        @implementation StockData

        @synthesize ticker, values;

        - (id)initWithContents:(NSString *)newName {
            if (self = [super init]) {
                ticker = newName;
            }
            return self;
        }

        - (void)getData {
            NSURL *url = [NSURL URLWithString:
                [NSString stringWithFormat:@"http://download.finance.yahoo.com/d/quotes.csv?s=%@&f=%@&e=.csv",
                 ticker, @"chgvj1"]];
            NSError *error;
            NSURLResponse *response;
            NSURLRequest *request = [NSURLRequest requestWithURL:url];
            NSData *stockData = [NSURLConnection sendSynchronousRequest:request
                                                      returningResponse:&response
                                                                  error:&error];
            if (stockData) {
                NSString *tempStr = [[NSString alloc] initWithData:stockData encoding:NSASCIIStringEncoding];
                NSArray *receivedValuesArr = [tempStr componentsSeparatedByString:@","];
                [tempStr release];
                values = [NSDictionary dictionaryWithObjects:receivedValuesArr
                                                     forKeys:[@"change, high, low, volume, market"
                                                              componentsSeparatedByString:@", "]];
            } else {
                NSLog(@"Connection failed: %@", error);
            }
        }

        - (void)dealloc {
            [ticker release];
            [values release];
            [super dealloc];
            NSLog(@"Release took place fine");
        }

        @end

  • How should I create a mutable, varied jtree with arbitrary/generic category nodes?

    - by Pureferret
    Please note: I don't want coding help here, I'm on Programmers for a reason. I want to improve my program planning/writing skills, not (just) my understanding of Java.

    I'm trying to figure out how to make a tree which has an arbitrary category system, based on the skills listed for the LARP game linked here. My previous attempt had a bool for whether a skill was also a category; trying to code around that was messy. Drawing out my tree, I noticed that only my 'leaves' were skills and I'd labelled the others as categories.

    Explanation of the tree: the tree is 'born' with a set of hard-coded highest-level categories (Weapons, Physical and Mental, Medical, etc.). From this the user needs to be able to add a skill. Ultimately they want to add 'One-handed Sword Specialisation', for instance. To do so you'd ideally click 'add' with Weapons selected, then select One-handed from a combo box, then click add again and enter a name in a text field, then click add again to add a 'level' or 'tier': first Proficiency, then Specialisation. Of course, if you want to buy a different skill it's completely different, which is what I'm having trouble getting my head around, let alone programming.

    What is a good system for describing this sort of tree in code? All the other JTree examples I've seen have some predictable pattern, and I don't want to have to code this all in 'literals'. Should I be using abstract classes? Interfaces? How can I make this cluster of objects extensible when I add other skills, not listed above, that behave differently? And if there is no good system to use, is there a good process for working out how to do this sort of thing?
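
    Purely as an illustration of the category-versus-leaf distinction noticed above (these class names are hypothetical and not taken from the game's rules), a composite-style node model could look like the sketch below, with a JTree built on top of it via DefaultMutableTreeNode or a custom TreeModel:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch: categories hold children, only leaves are buyable skills.
        abstract class SkillNode {
            final String name;
            SkillNode(String name) { this.name = name; }
        }

        class Category extends SkillNode {                  // Weapons, One-handed, ...
            final List<SkillNode> children = new ArrayList<SkillNode>();
            Category(String name) { super(name); }
            void add(SkillNode child) { children.add(child); }
        }

        class Skill extends SkillNode {                     // e.g. One-handed Sword Specialisation
            final int tier;                                 // 1 = Proficiency, 2 = Specialisation (assumed)
            Skill(String name, int tier) { super(name); this.tier = tier; }
        }

    The point of the sketch is only that the "is this a category?" bool disappears: the type of the node answers it.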

  • How to fix the error "dpkg: error processing colord (--configure)"

    - by ranjitpradhan
    I have upgraded my Ubuntu from 11.10 to 12.04. Now I find that when I try to install some packages it shows an error. After reading some blogs I tried to fix the error with "sudo dpkg --configure -a", but when I run this command it shows another error:

        Setting up colord (0.1.16-2) ...
        useradd: cannot lock /etc/passwd; try again later.
        adduser: `/usr/sbin/useradd -d /var/lib/colord -g colord -s /bin/false -u 115 colord' returned error code 1. Exiting.
        dpkg: error processing colord (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up whoopsie (0.1.32) ...
        useradd: cannot lock /etc/passwd; try again later.
        adduser: `/usr/sbin/useradd -d /nonexistent -g whoopsie -s /bin/false -u 115 whoopsie' returned error code 1. Exiting.
        dpkg: error processing whoopsie (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up lightdm (1.2.1-0ubuntu1) ...
        Adding system user `lightdm' (UID 115) ...
        Adding new user `lightdm' (UID 115) with group `lightdm' ...
        useradd: cannot lock /etc/passwd; try again later.
        adduser: `/usr/sbin/useradd -d /var/lib/lightdm -g lightdm -s /bin/false -u 115 lightdm' returned error code 1. Exiting.
        dpkg: error processing lightdm (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of ubuntu-desktop:
         ubuntu-desktop depends on lightdm; however:
          Package lightdm is not configured yet.
        dpkg: error processing ubuntu-desktop (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         colord
         whoopsie
         lightdm
         ubuntu-desktop

    What can I do now?

  • Why do I have an error when adding states in slick?

    - by SystemNetworks
    When I was going to create another state, I got an error. This is my code:

        public static final int play2 = 3;

    and

        public Game(String gamename){
            this.addState(new mission(play2));
        }

    and

        public void initStatesList(GameContainer gc) throws SlickException{
            this.getState(play2).init(gc, this);
        }

    The error is on the addState line in the snippet above. I don't know where the problem is, but if you want the whole code, here it is:

        package javagame;

        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;

        public class Game extends StateBasedGame{

            public static final String gamename = "NET FRONT";
            public static final int menu = 0;
            public static final int play = 1;
            public static final int train = 2;
            public static final int play2 = 3;

            public Game(String gamename){
                super(gamename);
                this.addState(new Menu(menu));
                this.addState(new Play(play));
                this.addState(new train(train));
                this.addState(new mission(play2));
            }

            public void initStatesList(GameContainer gc) throws SlickException{
                this.getState(menu).init(gc, this);
                this.getState(play).init(gc, this);
                this.getState(train).init(gc, this);
                this.enterState(menu);
                this.getState(play2).init(gc, this);
            }

            public static void main(String[] args) {
                try{
                    AppGameContainer app = new AppGameContainer(new Game(gamename));
                    app.setDisplayMode(1500, 1000, false);
                    app.start();
                }catch(SlickException e){
                    e.printStackTrace();
                }
            }
        }
        //SYSTEM NETWORKS(C) 2012 NET FRONT

  • Forking a GPL dual licensed software with business owned copyrights

    - by Eric
    After receiving threats from the copyright holder of a dual-licensed piece of software (GPL2 and commercial) that I must buy the commercial version for projects in production, I am thinking of making a fork. In the case of software dual-licensed under GPL2 and a commercial licence, with the copyrights owned by a business, is forking the GPL2 version an option? And is forking a good way to deal with such cases?

    Background information: the software is a web CMS released in two versions, a GPL2 free open-source edition and a commercial edition including technical support and extra functionality. The problem is that now, basing their argument on the "distribution" definition in the GPL2, the company holding the copyrights argues that delivering the software and some extensions to a client is considered "distribution", and that such a "distribution" falls under the GPL2 obligation to release the custom-made extension code. The custom-made extensions are mainly designs, templates and very specific functionality.

    Basically they give me three choices:
    1. buy the commercially licensed edition for the GPL-based projects in production,
    2. delete all the projects in production based on the GPL2 version,
    3. release all the extensions as GPL2 code.

    The first two options are not realistic for finished projects. The third option could be fine, but as most of the extensions are very specific, cleaning up the code to make it usable by other users means a lot of work, and I am also not sure the clients will appreciate having their website designs and specific functionality released publicly. The copyright-holding company has even contacted some clients directly, giving them the "choice".

    I know that this is a very corporate interpretation of the GPL2, and that such an action is far from legally sound, but as an independent developer I don't want to take the risk of getting involved in long and tiring legal procedures.

    PS. This question was first asked on Stack Overflow, where it fell out of scope and was closed; after reading the present site's FAQ, discussing software licensing seems fine here.

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. E.g.:

        context = {'a': 1, 'b': 2}

        import types
        test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests')
        test_context_module.__dict__.update(context)

        import sys
        sys.modules['TestContext'] = test_context_module

    My immediate goal in this regard is to be able to provide a context for timing test execution:

        import timeit
        timeit.Timer('a + b', 'from TestContext import *')

    It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this, though, since a) it has other potential applications, and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances.

    EDITS/REVELATIONS/PHOOEYS/EUREKAE:

    I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the timeit module. In other words, the globals dictionary used when executing that code is that of __main__, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question.

    I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon.

    The similar-but-subtly-different case of dynamically loading a module from a file that is not on Python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However, this doesn't really answer my question, because really, what if you're running Python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this.

    But the question, essentially, at this point is how to set the global (i.e. module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?

  • Where should "display functions" live in an MVC web app?

    - by User
    I'm using the Yii Framework, which is an MVC PHP framework pretty similar to your standard web-based MVC framework. I want to display the related data from a many-to-many table as a list of strings in my view. Assume a table schema like:

        tag      { id, name }
        post     { id, title, content, date }
        post_tag { post_id, tag_id }

    A post will display like:

        Date: 9/27/2012
        Title: Some Title
        Content: blah blah blah...
        Tags: Smart Funny Cool Informative

    I can achieve this by doing something like this in my Post view ($model->tags is an array of Tag objects associated with my model):

        <?php echo join(' ', array_map(function($tag) { return $tag->name; }, $model->tags)); ?>

    My questions are:

    1. Is this amount of code/logic okay in the view? (Personally I think I'd rather just reference a property or call a single function.)
    2. If not, where should this code live: the model, the controller, a helper?

    Potentially I may want to use it in other views as well. Ultimately I think it's purely a display issue, which would suggest it belongs in the view, but then I have to repeat the code in every view I want to use it in.
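
    For what it's worth, a sketch of the "single function" variant the first question hints at, as a getter on the Post model. The method name is illustrative, and whether this belongs in the model at all is exactly what the question is asking:

        // In the Post model (Yii 1.x style, illustrative only)
        public function getTagNames()
        {
            return join(' ', array_map(function($tag) { return $tag->name; }, $this->tags));
        }

        // In any view: Yii's component magic getter resolves getTagNames()
        // <?php echo CHtml::encode($model->tagNames); ?>

    This keeps each view down to a single property reference while leaving the joining logic in one place.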

  • Debugger for file I/O development?

    - by datenwolf
    Okay, the question title may be a bit cryptic, but it aptly describes what I'm looking for. I think every experienced coder has been through this numerous times: you get a binary file format specification, you implement the reader for it, and… nothing works as expected. So you run your code in the debugger, step through it line by line, every header field is read in seemingly correctly, but when it comes to the bulk data, offsets and indices no longer match up.

    What would really help in this situation is a binary file viewer that shows you the position of your file pointer as you step through the code, and ideally also highlights all memory maps. Then you could see the context of the current I/O operations, most notably those darn "off-by-one" mistakes, which are even more annoying when reading a file.

    Implementing such a debugger should not be too hard: traces on the process' file descriptors/handles and triggers on the I/O functions to update the display. Only: I don't know of any such debugger. Do I just lack knowledge of the existence of such a tool, or is there really no such thing?

  • protected abstract override Foo(); – er... what?

    - by Muljadi Budiman
    A couple of weeks back, a co-worker was pondering a situation he was facing. He was looking at the following class hierarchy:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
        }

        class FirstConcrete : SecondaryBase
        {
        }

        class SecondConcrete : SecondaryBase
        {
        }

    Basically, the first two classes are abstract, but the OriginalBase class has Test implemented as a virtual method. What he needed was to force the concrete class implementations to provide a proper body for the Test method, but he can't mark the method as abstract since it is already implemented in the OriginalBase class.

    One way to solve this is to hide the original implementation and then force further derived classes to implement another method that replaces it. The code would look like the following:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
            protected sealed override void Test()
            {
                Test2();
            }

            protected abstract void Test2();
        }

        class FirstConcrete : SecondaryBase
        {
            // Have to override Test2 here
        }

        class SecondConcrete : SecondaryBase
        {
            // Have to override Test2 here
        }

    With the above code, the SecondaryBase class seals the Test method so it can no longer be overridden. It also introduces an abstract method Test2, which forces the concrete classes to override it and provide the proper implementation. Calling Test will then invoke the proper Test2 implementation in each respective concrete class.

    I was wondering if there's a way to tell the compiler to treat the Test method in SecondaryBase as abstract, and apparently you can, by combining the abstract and override keywords. The code looks like the following:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
            protected abstract override void Test();
        }

        class FirstConcrete : SecondaryBase
        {
            // Have to override Test here
        }

        class SecondConcrete : SecondaryBase
        {
            // Have to override Test here
        }

    The method signature looks a bit funky, because most people treat the override keyword as meaning you then need to provide the implementation as well, but the effect is exactly as we desired. The concepts are still valid: you're overriding the Test method from its original implementation in the OriginalBase class, but you don't want to implement it; rather, you want the classes that derive from SecondaryBase to provide the proper implementation, so you also mark it as abstract.

    I don't think I've ever seen this before in the wild, so it was pretty neat to find that the compiler supports this case.

  • PHP plugin to replace '->' with '.' as the member access operator? Or even better: alternative syntax?

    - by Gigi
    Present-day usable solution: note that if you use an IDE or an advanced editor, you can make a code template, or record a macro that inserts '->' when you press Ctrl and '.' or something. NetBeans has macros, and I have recorded one for this and I like it a lot :) (just click the red circle toolbar button (start recording a macro), then type -> into the editor (that's all the macro will do, insert the arrow), then click the gray square (stop recording) and assign the 'Ctrl dot' shortcut to it, or whatever shortcut you like).

    The PHP plugin: the PHP plugin would also have to have a different string concatenation operator than the dot. Maybe a double dot? Yeah... why not. All it has to do is set an activation tag so that it doesn't replace/interpret '.' as '->' for old scripts and scripts that don't intend to use this. Something like this (notice the '<?php' tag modified to '<?php+'):

        <?php+ $obj.i = 5 ?>

    This way it wouldn't break old code. (And you can just add the '<?php+' code template to your editor and then type 'php tab' (for NetBeans) and it would insert '<?php+'.)

    With the alternative syntax method you could even have old and new syntax cohabiting on the same page, like this (I am illustrating this to show the great compatibility of this method, not because you would want to do it):

        <?php+ $obj.i = 5; ?>
        <?php $obj->str = 'a' . 'b'; ?>

    You could change the tag to something more explanatory, in case somebody who doesn't know about the plugin reads the script and thinks it's a syntax error:

        <?php-dot.com $obj.i = 5; ?>

    This is easy because most editors have code templates, so it's easy to assign a shortcut to it. And whoever doesn't want the dot replacement doesn't have to use it. These are NOT ultimate solutions, they are ONLY examples to show that solutions exist, and that arguments against replacing '->' with '.' are only excuses. (Just admit you like the arrow, it's ok :)

    With this potential method, nobody who doesn't want to use it would have to use it, and it wouldn't break old code. And if other problems (ahem... excuses) arise, they could be fixed too. So who can, and who will, do such a thing?

  • Git repo: Unravelling my mess into tidy branches

    - by Martin
    I wanted to play with a project, so I git cloned it and, following its instructions, created a local branch for my configuration (I guess so that users can merge updates back). At first I was just tweaking to suit my preferences, so I didn't bother with any further branching, but now I have some code that might be useful to someone else, but with my passwords, etc., in the same branch.

    Effectively, I have one big branch from which I'd like to produce:
    - a Postgres backend (the default) but with some new code I've added,
    - a MySQL backend (the biggest change I've made) with that same new code,
    - my settings: I can't git ignore the settings file because I occasionally have to add sections for new functionality, but I need to keep my personal settings out of the public branches! I guess this would work best as a local-only branch,
    - dev branches, which I would branch from the MySQL one.

    Starting from scratch, I think I could figure out how to branch/merge the various updates, but is there an easy way to walk through the existing repo and choose which commits to apply to which branch? Or possibly create a branch from a point upstream and then merge back, excluding certain commits?
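
    One commonly used way to do exactly that walk-through is to cherry-pick commits onto fresh branches. This is only a sketch: the branch and remote names are made up, and each <sha> comes from reading the log of the existing branch.

        # start a clean branch for each variant from the upstream history
        git checkout -b postgres-plus upstream/master      # assumed upstream remote/branch
        git log --oneline my-big-branch                    # pick out the commits that belong here
        git cherry-pick <sha1> <sha2>                      # apply just those commits

        git checkout -b mysql-backend postgres-plus
        git cherry-pick <sha-of-mysql-commits>

        # keep the settings commits on a branch that is simply never pushed
        git checkout -b local-settings mysql-backend
        git cherry-pick <sha-of-settings-commit>

    An interactive rebase (git rebase -i) on a copy of the big branch can serve the same purpose if the commits mostly need reordering and dropping rather than distributing across several branches.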

  • Mercurial local repository backup

    - by Ricket
    I'm a big fan of backing things up. I keep my important school essays and such in a folder in my Dropbox. I make sure that all of my photos are duplicated to an external drive. I have a home server where I keep important files mirrored across two drives inside the server (like a software RAID 1).

    So for my code, I have always used Subversion to back it up. I keep the trunk folder with a stable copy of my application, but then I create a branch named with my username, and inside there is my working copy. I make very few changes between commits to that branch, with the understanding that the code in there is my backup.

    Now I'm looking into Mercurial, and I must admit I haven't truly used it yet, so I may have this all wrong. But it seems to me that you have a server-side repository, which you clone to a working directory in the form of a local repository. Then, as you work on something, you make commits to that local repository, and when things are in a state to be shared with others, you hg push to the parent repository on the server. Between pushes of stable, tested, bug-free code, where is the backup?

    After doing some thinking, I've come to the conclusion that it is not meant for backup purposes and it assumes you've handled that on your own. I guess I need to keep my Mercurial local repositories in my Dropbox or some other backed-up location, since my in-progress code is not pushed to the server. Is this pretty much it, or have I missed something?

    If you use Mercurial, how do you back up your local repositories? If you had turned on your computer this morning and your hard drive had gone up in flames (or, more likely, the read head had gone bad, or the OS had corrupted itself...), what would be lost? If you had spent the past week developing a module, writing test cases for it, documenting and commenting it, and then a virus wipes your local repository away, isn't that the only copy?

    So then, on the flip side, do you create a remote repository for every local repository and push to it all the time? How do you find a balance? How do you ensure your code is backed up? Where is the line between using Mercurial as backup and using a local filesystem backup utility to keep your local repositories safe?

  • Good books or tutorials on building projects without an IDE?

    - by CodexArcanum
    While I know perfectly well that building software without an IDE is possible, I don't actually know much about doing it well. I've been using graphical tools like Visual Studio or Code::Blocks so long that I'm pretty well lost without them, and that really stinks when I want to change environments or languages. I couldn't really do anything in D until someone made a Visual Studio plugin, and now that I'm trying to do more development on the Mac, I can't use D again because the XCode plugins don't work.

    I'm sick of being lost when I see a makefile and of having no idea what I'm supposed to do with a folder full of source files. People can't be compiling them one by one at the console and then linking them one by one; you'd spend more time typing file names than code.

    So what are the automation and productivity tools of the non-IDE user? How do you manage a project when you're writing all the code in emacs or vim or nano or whatever? I would love it if there were a book or a guide online that spells some of this out.
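
    For a sense of what those makefiles are doing, here is a deliberately tiny sketch (the file names and flags are invented): the whole "compile each file, then link" chore collapses into a few rules, and make rebuilds only what changed since the last run.

        # Makefile sketch -- note that recipe lines must start with a tab character
        CC     = gcc
        CFLAGS = -Wall -O2
        OBJS   = main.o parser.o util.o

        app: $(OBJS)                 # link step: runs when any object file changed
        	$(CC) $(CFLAGS) -o $@ $(OBJS)

        %.o: %.c                     # compile step: one pattern rule covers every .c file
        	$(CC) $(CFLAGS) -c $<

        clean:
        	rm -f app $(OBJS)

    Typing make in that folder then builds (or rebuilds) the app without anyone typing file names by hand, which is the role the IDE's build button normally plays.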

  • XNA 4.0 - Purple/Pink Tint Over All Sprites After Viewing in FullScreen

    - by D. Dubya
    I'm a noob to the game dev world and recently finished the 2D XNA tutorial from http://www.pluralsight.com. Everything was perfect until I decided to try the game in fullscreen mode. The following code was added to the Game1 constructor:

        graphics.PreferredBackBufferWidth = 800;
        graphics.PreferredBackBufferHeight = 480;
        graphics.IsFullScreen = true;

    As soon as it launched in fullscreen, I noticed that the entire game was tinted; none of the colours were appearing as they should. That code was then removed and the game launched in the 800x480 window again, but the tint remained. I commented out all my Draw code so that all that was left was:

        GraphicsDevice.Clear(Color.CornflowerBlue);

        //spriteBatch.Begin();
        //gameState.Draw(spriteBatch, false);
        //spriteBatch.End();

        //spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
        //gameState.Draw(spriteBatch, true);
        //spriteBatch.End();

        base.Draw(gameTime);

    The result was an empty window that was tinted purple, not blue. I changed the GraphicsDevice.Clear colour to Color.White and the window was tinted pink; Color.Transparent gave a black window. I even tried rebooting my PC, but the 'tint' still remains. I'm at a loss here.
