Search Results

Search found 9436 results on 378 pages for 'component architecture'.

Page 336/378

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
    I'm getting a lot of errors because qmake is improperly ordering the boost libraries I'm using. Here's what the .pro file looks like:

        QT += core gui
        TARGET = MyTarget
        TEMPLATE = app
        CONFIG += no_keywords \
            link_pkgconfig
        SOURCES += file1.cpp \
            file2.cpp \
            file3.cpp
        PKGCONFIG += my_package \
            sqlite3
        LIBS += -lsqlite3 \
            -lboost_signals \
            -lboost_date_time
        HEADERS += file1.h \
            file2.h \
            file3.h
        FORMS += mainwindow.ui
        RESOURCES += Resources/resources.qrc

    This produces the following command:

        g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore

    Note: mylib1 and mylib2 are statically compiled by another project, placed in /usr/local/lib with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with Qt's ordering. Here's the .pc file:

        prefix=/usr/local
        exec_prefix=${prefix}
        libdir=${exec_prefix}/lib
        includedir=${prefix}/include

        Name: my_package
        Description: My component package
        Version: 0.1
        URL: http://example.com
        Libs: -L${libdir} -lmylib1 -lmylib2
        Cflags: -I${includedir}/my_package/

    The linking stage fails spectacularly, as mylib1 and mylib2 come up with a lot of undefined references to boost symbols that both the app and mylib1/mylib2 use. We have another build method using scons, and it orders things properly for the linker. Its build command is below:

        g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lboost_signals -lboost_date_time -lQtGui -lQtCore

    Note that the principal difference is the order of the boost libs: scons puts them at the end, just before QtGui and QtCore, while qmake puts them first. The other differences in the compile commands are unimportant; I have hand-modified the qmake-produced makefile, and the simple reordering fixed the problem. So my question is: how do I enforce the right order in my .pro file, despite what qmake thinks it should be?

  • objective-c having issues with an NSDictionary object

    - by Mark
    I have a simple iPhone app that I'm learning with, and I want to have an instance variable called urlLists which is an NSDictionary. I have declared it like so:

        @interface MyViewController : UIViewController <UIPickerViewDataSource, UIPickerViewDelegate> {
            IBOutlet UIPickerView *pickerView;
            NSMutableArray *categories;
            NSDictionary *urlLists;
        }
        @property(retain) NSDictionary *urlLists;
        @end

    and in the implementation:

        @implementation MyViewController
        @synthesize urlLists;
        ...
        - (void)viewDidLoad {
            [super viewDidLoad];
            categories = [[NSMutableArray alloc] init];
            [categories addObject:@"Sport"];
            [categories addObject:@"Entertainment"];
            [categories addObject:@"Technology"];
            [categories addObject:@"Political"];
            NSArray *objects = [NSArray arrayWithObjects:@"value1", @"value2", @"value3", @"value4", nil];
            urlLists = [NSDictionary dictionaryWithObjects:objects forKeys:categories];
            for (id key in urlLists) {
                NSLog(@"key: %@, value: %@", key, [urlLists objectForKey:key]);
            }
        }
        ...
        @end

    And this all works up to here. I have added a UIPicker to my app, and when I select one of the items, I want to log the one picked and its related entry in my dictionary:

        - (void)pickerView:(UIPickerView *)thePickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            for (id key in self.urlLists) {
                NSLog(@"key: %@, value: %@", key, [urlLists objectForKey:key]);
            }
        }

    but I get the old EXC_BAD_ACCESS error... I know I'm missing something small, but what? Thanks

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it?

    I've seen Rails Engines, but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting a blog engine onto your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY.

    I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, then define models & controllers inside it and subclass them in my various applications? Or do I define many modules inside the gem that I include in different spots inside my various applications? How do I test the gem, and then test the set of customizations and overridden functionality on top of it?

    I'm also concerned with how I'll develop the gem and the Rails apps in tandem: can I vendor a git repository of the gem into the app and push from that, so I don't have to build a new gem every iteration? Also, are there private gem hosts, or can I set my own gem source up?

    Finally, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!

  • PowerPoint PlugIn does not read defaults from .dll.config file

    - by Nick T
    I'm working on a very simple PowerPoint plugin, and I'm quite a bit stumped. In my settings.settings file, I have configured a setting "StartPath", which references where the PowerPoint plugin will navigate to using a Browser component. After I compile the application and run the installer generated by the Setup project, the application is installed and uses the default value in the settings file. However, if I edit the application.dll.config file, the plugin still uses the old values. How can I set things up such that the plugin references the .dll.config file and not its default settings? The code to access the settings is listed below, including the other variants I have tried:

        //Attempt 1
        string location = MyApplication.Properties.Settings.Default.StartPath;

        //Attempt 2
        string location = ConfigurationManager.AppSettings["StartPath"];

        //Attempt 3: Configuration element is inaccessible due to its protection level
        string applicationName = Environment.GetCommandLineArgs()[0] + ".exe";
        string exePath = System.IO.Path.Combine(Environment.CurrentDirectory, applicationName);
        Configuration config = ConfigurationManager.OpenExeConfiguration(exePath);
        string location = config.AppSettings["StartPath"];

  • Flex Tree with infinite parents and children

    - by Tempname
    I am working on a tree component and I am having a bit of an issue with populating the data provider for this tree. The data that I get back from my database is a simple array of value objects. Each value object has 2 properties: ObjectID and ParentID. For parents the ParentID is null, and for children the ParentID is the ObjectID of the parent. Any help with this is greatly appreciated. Essentially the tree should look something like this: Parent1 Child1 Child1 Child2 Child1 Child2 Parent2 Child1 Child2 Child3 Child1. This is the current code that I am testing with:

        public function setDataProvider(data:Array):void
        {
            var tree:Array = new Array();

            for (var i:Number = 0; i < data.length; i++)
            {
                // do the top level array
                if (!data[i].parentID)
                {
                    tree.push(data[i], getChildren(data[i].objectID, data));
                }
            }

            function getChildren(objectID:Number, data:Array):Array
            {
                var childArr:Array = new Array();

                for (var k:Number = 0; k < data.length; k++)
                {
                    if (data[k].parentID == objectID)
                    {
                        childArr.push(data[k]);
                        //getChildren(data[k].objectID, data);
                    }
                }

                return childArr;
            }

            trace(ObjectUtil.toString(tree));
        }

    Here is a cross-section of my data:

        ObjectID   ParentID
        1          NULL
        10         NULL
        8          NULL
        6          NULL
        4          6
        3          6
        9          6
        2          6
        11         7
        7          8
        5          8
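
    The grouping being attempted here is language-agnostic, so purely as an illustration (in Java rather than ActionScript, and with invented Node/TreeBuilder names), here is one way to turn a flat list of (ObjectID, ParentID) records into a nested structure. Instead of recursing per child, it indexes every record by id and attaches each one to its parent, which handles arbitrary nesting depth in a single pass:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class Node {
            final int objectId;
            final Integer parentId;                    // null marks a top-level parent
            final List<Node> children = new ArrayList<>();

            Node(int objectId, Integer parentId) {
                this.objectId = objectId;
                this.parentId = parentId;
            }
        }

        class TreeBuilder {
            // Returns only the root nodes; every other node ends up in some children list.
            static List<Node> build(List<Node> flat) {
                Map<Integer, Node> byId = new HashMap<>();
                for (Node n : flat) {
                    byId.put(n.objectId, n);
                }
                List<Node> roots = new ArrayList<>();
                for (Node n : flat) {
                    if (n.parentId == null) {
                        roots.add(n);                  // top level
                    } else {
                        Node parent = byId.get(n.parentId);
                        if (parent != null) {
                            parent.children.add(n);    // attach at whatever depth the parent sits
                        }
                    }
                }
                return roots;
            }
        }

    The same map-then-attach idea carries over directly to the ActionScript version, and the commented-out recursive getChildren call becomes unnecessary.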

  • Have something loaded only when JList item is visible

    - by elvencode
    Hello, I'm implementing a JList populated with a lot of elements. Each element corresponds to an image, so I'd like to show a resized preview of it inside each row of the list. I've implemented a custom ImageCellRenderer extending JLabel, and in getListCellRendererComponent I create the thumbnail if there isn't one yet for that element. Each row corresponds to a Page class where I store the path of the image and the icon applied to the JLabel. Each Page object is put inside a DefaultListModel to populate the JList. The render code is something like this:

        public Component getListCellRendererComponent(JList list, Object value, int index,
                boolean isSelected, boolean cellHasFocus) {
            Page page = (Page) value;
            if (page.getImgIcon() == null) {
                System.out.println(String.format("Creating thumbnail of %s", page.getImgFilename()));
                ImageIcon icon = new ImageIcon(page.getImgFilename());
                int thumb_width = icon.getIconWidth() > icon.getIconHeight()
                        ? 128 : ((icon.getIconWidth() * 128) / icon.getIconHeight());
                int thumb_height = icon.getIconHeight() > icon.getIconWidth()
                        ? 128 : ((icon.getIconHeight() * 128) / icon.getIconWidth());
                icon.setImage(getScaledImage(icon.getImage(), thumb_width, thumb_height));
                page.setImgIcon(icon);
            }
            setIcon(page.getImgIcon());
            return this;
        }

    I was thinking that the cell renderer would only be called when an item is visible in the list, but I'm seeing that all the thumbnails are created as soon as I add the Page objects to the list model. I've tried loading the items and setting the model in the JList afterwards, and setting the model first and then appending the items, but the results are the same. Is there any way to load the data only when necessary, or do I need to create a custom control, like a JScrollPane with stacked items inside, where I check the visibility of each element myself? Thanks
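
    One common reason every thumbnail is created up front is that, without a fixed cell size, the JList calls the renderer for every element just to compute preferred sizes as soon as the model is filled. Below is a rough sketch of the other half of a fix: keep the renderer cheap and push the image scaling onto a SwingWorker, repainting the list when each thumbnail arrives. It assumes the Page class from the question (getImgFilename/getImgIcon/setImgIcon); LazyThumbnailRenderer is an invented name.

        import java.awt.Component;
        import java.awt.Image;
        import java.util.HashSet;
        import java.util.Set;
        import javax.swing.DefaultListCellRenderer;
        import javax.swing.ImageIcon;
        import javax.swing.JList;
        import javax.swing.SwingWorker;

        class LazyThumbnailRenderer extends DefaultListCellRenderer {

            private final Set<Page> loading = new HashSet<>();

            @Override
            public Component getListCellRendererComponent(JList<?> list, Object value, int index,
                                                          boolean isSelected, boolean cellHasFocus) {
                super.getListCellRendererComponent(list, value, index, isSelected, cellHasFocus);
                Page page = (Page) value;
                setText(null);
                if (page.getImgIcon() != null) {
                    setIcon(page.getImgIcon());        // already cached on the model object
                } else {
                    setIcon(null);                     // placeholder until the worker finishes
                    if (loading.add(page)) {           // avoid launching duplicate workers
                        loadThumbnailAsync(list, page);
                    }
                }
                return this;
            }

            private void loadThumbnailAsync(final JList<?> list, final Page page) {
                new SwingWorker<ImageIcon, Void>() {
                    @Override
                    protected ImageIcon doInBackground() {
                        ImageIcon full = new ImageIcon(page.getImgFilename());
                        Image scaled = full.getImage().getScaledInstance(128, -1, Image.SCALE_SMOOTH);
                        return new ImageIcon(scaled);
                    }

                    @Override
                    protected void done() {
                        try {
                            page.setImgIcon(get());    // cache the thumbnail
                        } catch (Exception ignored) {
                        }
                        loading.remove(page);
                        list.repaint();                // re-renders the cell with the icon
                    }
                }.execute();
            }
        }

    Pairing this with list.setFixedCellWidth(...) and list.setFixedCellHeight(...) (or setPrototypeCellValue) is what stops the JList from touching every element during layout, so only rows that actually become visible trigger any loading.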

  • Click behaviour - Difference in IE and FF ?!

    - by OlliD
    Hey folks, I just came to the conclusion that a project I am currently working on might have a "logical" error in its functionality. Currently I am using server technology with PHP/MySQL and jQuery. Within the page there's a normal link reference such as <a href="contentpage?page=xxx">next step</a>. The pain point now seems to be the jQuery click event bound to the same element. The intention was to save the (current) content of the page (- form elements) via another PHP script using the PHP session command. For whatever reason, IE handles the jQuery click event right before executing the standard HTML navigation, which reloads the current page with the new page parameter. In FF the behaviour is different: I assume that FF first executes the HTML navigation and afterwards executes the JavaScript code which handles the click event. Therefore the result set here is wrong, respectively empty. My question is whether you have made the same experience and how you handled or worked around this problem. I'd be thankful for any of your tips or further feedback. Maybe you also have a suggestion on how to rethink the current architecture. Regards, Oliver

  • Proper API Design for Version Independence?

    - by Justavian
    I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API. The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without breaking any existing third-party apps. I have a way to solve this problem, but I'm not sure it's the ideal way, so I was looking for other ideas. My plan would be to essentially have three dlls:

    APIServer_1_0.dll: this would be the dll with all of the dependencies.

    APIClient_1_0.dll: this would be the dll our developers would actually refer to, with no references to any of the mess in our solution.

    APISupport_1_0.dll: this would contain the interfaces which allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above dlls would depend upon it, and it would be the only dll that the "client" piece refers to.

    I initially arrived at this design because the way in which we do inter-process communication between Windows services is somewhat similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls). While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.
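
    The question is about .NET assemblies, but the shape of the plan can be sketched in another language purely for illustration. In Java terms: a small "support" artifact holds only interfaces, the client codes against those, and the heavyweight implementation is located and instantiated at runtime, so the client never references the dependency web directly. All names below are invented.

        // --- ReportService.java, lives in the "support" jar (interfaces only) ---
        public interface ReportService {
            String buildReport(String customerId);
        }

        // --- ApiClient.java, lives in the "client" jar (depends only on the support jar) ---
        import java.lang.reflect.Constructor;

        public final class ApiClient {
            private final ReportService service;

            public ApiClient(String implementationClassName) throws Exception {
                // The implementation class name comes from configuration, so the
                // "server" artifact is only a runtime dependency, never a compile-time one.
                Class<?> impl = Class.forName(implementationClassName);
                Constructor<?> ctor = impl.getDeclaredConstructor();
                service = (ReportService) ctor.newInstance();
            }

            public String buildReport(String customerId) {
                return service.buildReport(customerId);
            }
        }

    Versioning then happens on the interface artifact: new functionality means new interfaces (or a new versioned artifact) rather than changes to the ones already published.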

  • What about parallelism across the network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#:

        NetworkParallel.ForEach(myEnumerable, () =>
        {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

    The fact that such a parallel task will be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application.

    The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

  • Doctrine: Unable to execute either CROSS JOIN or SELECT FROM Table1, Table2?

    - by ropstah
    Using Doctrine I'm trying to execute either 1. a CROSS JOIN statement or 2. a SELECT FROM Table1, Table2 statement. Both seem to fail. The CROSS JOIN does execute, however the results are just wrong compared to executing it in Navicat. The multiple-table SELECT doesn't even execute, because Doctrine automatically tries to LEFT JOIN the second table.

    The cross join statement (this runs, however it doesn't include the joined records where the refClass User_Setting doesn't have a value):

        $q = new Doctrine_RawSql();
        $q->select('{s.*}, {us.*}')
          ->from('User u CROSS JOIN Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key')
          ->addComponent('u', 'User u')
          ->addComponent('s', 'Setting s')
          ->addComponent('us', 'u.User_Setting us')
          ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?', array(1, $this->usr_auto_key));

    And the select from multiple tables (this doesn't even run. It does not spot the many-many relationship between User and Setting in the first ->from() part and throws an exception: "User_Setting" with an alias of "us" in your query does not reference the parent component it is related to.):

        $q = new Doctrine_RawSql();
        $q->select('{s.*}, {us.*}')
          ->from('User u, Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key')
          ->addComponent('u', 'User u')
          ->addComponent('s', 'Setting s')
          ->addComponent('us', 'u.User_Setting us')
          ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?', array(1, $this->usr_auto_key));

  • Capturing stdout from an imported module in wxpython and sending it to a textctrl, without blocking the GUI

    - by splafe
    There are a lot of very similar questions to this, but I can't find one that applies specifically to what I'm trying to do. I have a simulation (written in SimPy) that I'm writing a GUI for; the main output of the simulation is text, printed to the console from 'print' statements. Now, I thought the simplest way would be to create a separate module GUI.py and import my simulation program into it:

        import osi_model

    I want all the print statements to be captured by the GUI and appear inside a TextCtrl, of which there are countless examples on here, along these lines:

        class MyFrame(wx.Frame):
            def __init__(self, *args, **kwds):
                <general frame initialisation stuff..>
                redir = RedirectText(self.txtCtrl_1)
                sys.stdout = redir

        class RedirectText:
            def __init__(self, aWxTextCtrl):
                self.out = aWxTextCtrl

            def write(self, string):
                self.out.WriteText(string)

    I am also starting my simulation from a 'Go' button:

        def go_btn_click(self, event):
            print 'GO'
            self.RT = threading.Thread(target=osi_model.RunThis())
            self.RT.start()

    This all works fine, and the output from the simulation module is captured by the TextCtrl, except the GUI locks up and becomes unresponsive - I still need it to be accessible (at the very minimum to have a 'Stop' button). I'm not sure if this is a botched attempt at creating a new thread that I've done here, but I assume a new thread will be needed at some stage in this process. People suggest using wx.CallAfter, but I'm not sure how to go about this considering the imported module doesn't know about wx, and also I can't realistically go through the entire simulation architecture and change all the print statements to wx.CallAfter, and any attempt to capture the shell from inside the imported simulation program leads to the program crashing. Does anybody have any ideas about how I can best achieve this? So all I really need is for all console text to be captured by a TextCtrl while the GUI remains responsive, and all text is solely coming from an imported module. (Also, a secondary question regarding a Stop button: is it bad form to just kill the simulation thread?) Thanks, Duncan

  • "Detecting" and loading of "plugins" in GAE

    - by Patrick Cornelissen
    Hi! I have a "plugin-like" architecture and I want to create one instance of each class that implements a dedicated interface and put these in a cache (to have a singleton-ish effect). The plugins will be provided as jars and put into the App Engine war file before the app is uploaded. I have tried to use the ClassPathScanningCandidateComponentProvider, as I'm using Spring anyway, but this didn't work: the provider complained that it was not able to find the HttpServletResponse class file while scanning the classpath. I can't get around this - when I add the servlet jar, then of course I get problems, because the same jar is also provided by the GAE; if I don't, I'm getting the error above. So I tried to add static initialization code, but of course this doesn't work, because the class is initialized when it's instantiated for the first time (well, I knew that, but it was worth a try). The last chance I currently see is to create a properties file with all plugin classes when the package is created, but this requires writing a Maven plugin etc. and I'd like to avoid that. Is there something that I am missing?
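
    One standard-library route that avoids classpath scanning entirely is java.util.ServiceLoader: each plugin jar declares its implementations in a META-INF/services file, and the host enumerates and instantiates them once at startup. A minimal sketch, with Plugin and PluginRegistry as invented names:

        package com.example.plugins;

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.ServiceLoader;

        // The dedicated interface every plugin implements.
        public interface Plugin {
            String name();
        }

        // Each plugin jar ships a text file named
        //   META-INF/services/com.example.plugins.Plugin
        // containing one fully qualified implementation class name per line.

        // Host side: instantiate each discovered plugin once and cache the instances.
        final class PluginRegistry {
            private static final List<Plugin> PLUGINS = load();

            private static List<Plugin> load() {
                List<Plugin> found = new ArrayList<Plugin>();
                for (Plugin p : ServiceLoader.load(Plugin.class)) {
                    found.add(p);                  // instantiated via the no-arg constructor
                }
                return Collections.unmodifiableList(found);
            }

            static List<Plugin> all() {
                return PLUGINS;
            }
        }

    The trade-off versus scanning is that every plugin jar has to carry the services file, but nothing needs to see servlet classes on the classpath for discovery to work.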

  • CQRS without using other patterns

    - by John Smith
    I would like to explain CQRS to my team of developers. I just can't figure out how to explain it in the simplest way, so they can implement the pattern rapidly without any other frameworks. I've read a lot of resources, including videos and articles, but I can't find how to implement CQRS without using other patterns like a service bus, event sourcing, or domain-driven design. I know the purpose of these patterns, but for the first step I don't want them to think that CQRS and these patterns must be tied together.

    My first idea is to say that CQRS is about separating the read part from the write part. The read part is composed only of the UI project and a DAL project. The write part is composed of a typical multilayer architecture: UI/BLL/DAL.

    Then, does CQRS say we must also have two datastores? What about the notion of commands which reveal the user's intention: is that part of CQRS or of DDD?

    Basically, how do I implement CQRS without using other patterns? I concede it's also not that clear in my mind, because I'm used to working with NCQRS/DDD/Event Sourcing/ServiceBus in my personal project. Thanks
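
    For what it is worth, the bare separation can be shown without any bus, events, or second datastore: one type that only handles state-changing commands, and one that only answers queries shaped for the screen, both free to sit over the same database. A small illustrative sketch in Java (all names invented):

        // Write side: a command expresses the user's intention and returns nothing.
        final class RegisterCustomerCommand {
            final String customerId;
            final String name;

            RegisterCustomerCommand(String customerId, String name) {
                this.customerId = customerId;
                this.name = name;
            }
        }

        interface CustomerWriteService {
            void handle(RegisterCustomerCommand command);    // mutates state, returns no query results
        }

        // Read side: returns data shaped for display and never mutates anything.
        final class CustomerSummary {
            final String customerId;
            final String displayName;

            CustomerSummary(String customerId, String displayName) {
                this.customerId = customerId;
                this.displayName = displayName;
            }
        }

        interface CustomerReadService {
            CustomerSummary findSummary(String customerId);  // query only
        }

    Whether the two sides share one set of tables or eventually get separate read models is a later, independent decision, which keeps the first explanation free of buses and event stores.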

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC where we have the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0, with SQL Server 2008 R2 as the DB. All are different projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate to and fro with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models a few issues started coming up. The multiple models have a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created. E.g.:

        CompanyModel (edmx) - Company (entity)  - ObjectChangeTracker, ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx)   - Order (entity)    - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) There are a few tables which are shared across the models; e.g. Company (entity) is used in all the models above. At compile time this does not throw any error, but at run time it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly?

    Thanks in advance, and appreciate it if anyone has an approach for these issues. Thanks, Kiran

  • How to Implement Rich Document Editor for iPhone

    - by benjismith
    I'm just getting started on a new iPhone/iPad development project, and I need to display a document with rich styled text (potentially with embedded images). The user will touch the document, dragging to highlight individual words or multiline text spans. When the text is highlighted, a context menu will appear, letting them change the color of highlighting or add margin notes (or other various bits of structured metadata). If you're familiar with adding comments to a Word document (or annotating a PDF), then this is the same sort of thing. But in my case, the typical user will spend many many hours within the app, adding thousands (in some cases, tens of thousands) of small annotations to the central document. All of those bits of metadata will be stored locally awaiting synchronization with a remote web service. I've read other pieces of advice, where developers suggest creating a UIWebView control and passing it an HTML string. But that seems kind of clunky, especially with all the context-sensitivity that I want to include. Anyhow, I'm brand new to iPhone development and Objective-C, though I have ten years of software development experience, using a variety of languages on many different platforms, so I'm not worried about getting my hands dirty writing new functionality from scratch. But if anyone has experience building a similar kind of component, I'm interested in hearing strategies for enabling that kind of rich document markup and annotation.

  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered) and around thirty percent of the records have missing values (to make matters worse, they are missing randomly without any pattern; that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory to choose the specification of the model, so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods for dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as a part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500,000 observations with 200 variables. I have two questions.

    Is there a good statistical reference out there for dimension reduction for categorical data along with robust imputation? (I think the first issue is imputation and then dimension reduction.)

    This is linked to the implementation of the above problem. I have used R extensively earlier and tend to use the transcan and impute functions heavily for continuous variables, and I use a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R will be of great help. Thank you.

  • MVC design for archived data view

    - by Hemant Tank
    Implementation of a standard archive process in ASP.NET MVC, backend SQL Server 2005. We have an existing web app built in MVC. We have an entity "Claim" and it has some child entities like ClaimDetails, Files, etc. - a pretty standard setup in the DB: each entity has its own table and they are linked via FKs. Now we need an "Archive" feature in the web app which will allow an admin to archive a Claim and its child entities. An archived Claim should become read-only when visited again. Here are some points on which I need your valued opinion:

    To keep it simple and scalable (for a few million records), for now we plan to simply add a bit field "Archived" to the Claim table in the DB and change the behaviour accordingly in the web app.

    We have a 'Manage claim' page which renders a bunch of different views for a Claim and its child entities. Now, for a read-only view we can either use the same views or have a separate set of views. What do you suggest? At the controller level, we can identify an archived claim and select which view to render. At the model level, it would be great to be able to use the same model used for Manage Claim, but it might not get us the "text" of some lookup fields. For example, Claim.BrandId is rendered as a dropdown in Manage Claim (requires only BrandId), but for the read-only view we need 'BrandText'.

    Any existing reference or architecture-level example would be great. Here's my previous SO post, but it's more about DB-level changes: Design a process to archive data (SQL Server 2005). Thank you.

  • Prototyping Qt/C++ in Python

    - by tstenner
    I want to write a C++ application with Qt, but build a prototype first using Python and then gradually replace the Python code with C++. Is this the right approach, and what tools (bindings, binding generators, IDE) should I use? Ideally, everything should be available in the Ubuntu repositories so I wouldn't have to worry about incompatible or old versions and have everything set up with a simple aptitude install. Is there any comprehensive documentation about this process or do I have to learn every single component, and if yes, which ones? Right now I have multiple choices to make:

    Qt Creator, because of the nice auto completion and Qt integration.

    Eclipse, as it offers support for both C++ and Python.

    Eric (haven't used it yet).

    Vim.

    PySide, as it's working with CMake and Boost.Python, so theoretically it will make replacing Python code easier.

    PyQt, as it's more widely used (more support) and is available as a Debian package.

    Edit: As I will have to deploy the program to various computers, the C++ solution would require 1-5 files (the program and some library files if I'm linking it statically); using Python I'd have to build PyQt/PySide/SIP/whatever on every platform and explain how to install Python and everything else.

  • Move to PHP on Windows? Concerns, hints, "please don't do!"?

    - by Daniel
    I am considering moving from Microsoft languages to PHP (just for web dev), which has quite an interesting syntax, a Perlish look (but a wider programmer base), and it allows me to reuse the web without reinventing it. I have some concerns too. I would be more than happy to gather some wisdom from the Stack Overflow community (challenges to my opinions warmly welcome). Here are my doubts:

    1. Efficiency. CGI is slow; what am I supposed to use? FastCGI? Or what else?

    2. Efficiency + stability. Is PHP on Windows really stable and a good choice in terms of performance?

    3. Database. I use MSSQL very often (I regret, I like it). Could I widely and efficiently interface PHP with MSSQL (using stored procedures smartly, for example)?

    4. XSLT + XML performance. I work quite a lot with XML and XSLT and I really find the MS XML parser a great software component. Are the parsers used in PHP fast, reliable and efficient (I am interested mainly in DOM, not SAX)?

    5. Objects. Is the PHP object programming model valid and efficient?

    6. Regex. How efficient is PHP at processing regexes?

    Many thanks for your advice.

  • DefaultTableCellRenderer getTableCellRendererComponent never gets called

    - by Dean Schulze
    I need to render a java.util.Date in a JTable. I've implemented a custom renderer that extends DefaultTableCellRenderer (below). I've set it as the renderer for the column, but the method getTableCellRendererComponent() never gets called. This is a pretty common problem, but none of the solutions I've seen work.

        public class DateCellRenderer extends DefaultTableCellRenderer {

            String sdfStr = "yyyy-MM-dd HH:mm:ss.SSS";
            SimpleDateFormat sdf = new SimpleDateFormat(sdfStr);

            public Component getTableCellRendererComponent(JTable table, Object value,
                    boolean isSelected, boolean hasFocus, int row, int column) {
                super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);
                if (value instanceof Date) {
                    this.setText(sdf.format((Date) value));
                } else
                    logger.info("class: " + value.getClass().getCanonicalName());
                return this;
            }
        }

    I've installed the custom renderer (and verified that it has been installed) like this:

        DateCellRenderer dcr = new DateCellRenderer();
        table.getColumnModel().getColumn(2).setCellRenderer(dcr);

    I've also tried:

        table.setDefaultRenderer(java.util.Date.class, dcr);

    Any idea why the renderer gets installed, but never called?
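
    Two things commonly produce exactly this symptom, so here is a hedged sketch of a setup that sidesteps both: setDefaultRenderer(Date.class, ...) is only consulted when the model's getColumnClass reports Date for that column (DefaultTableModel reports Object for everything), and a renderer set on an individual TableColumn is silently discarded if the table later rebuilds its columns (for example when setModel is called with autoCreateColumnsFromModel on). The DateCellRenderer below is the class from the question; everything else is illustrative.

        import java.util.Date;
        import javax.swing.JTable;
        import javax.swing.table.DefaultTableModel;

        public class DateTableDemo {
            public static void main(String[] args) {
                DefaultTableModel model = new DefaultTableModel(
                        new Object[]{"Id", "Name", "Created"}, 0) {
                    @Override
                    public Class<?> getColumnClass(int columnIndex) {
                        // Report the real type so class-based default renderers are used.
                        return columnIndex == 2 ? Date.class : Object.class;
                    }
                };
                model.addRow(new Object[]{1, "first", new Date()});

                JTable table = new JTable(model);
                // Registered per class, this survives column rebuilds that would
                // drop a renderer set on an individual TableColumn.
                table.setDefaultRenderer(Date.class, new DateCellRenderer());
            }
        }

    If the column-level setCellRenderer route is kept instead, it has to run after the final setModel call (or with setAutoCreateColumnsFromModel(false)), otherwise the freshly created TableColumn no longer carries the custom renderer.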

  • Implementing Zend MVC for my existing site - first step?

    - by Joel
    Hi guys, OK - newbie question here. I'll try not to bombard SO with lots of questions, and hopefully this first one will show me the method I'll need to follow for subsequent conversions. I have a web-based calendar system that I developed, but it was coded for me procedurally (using PHP). I'm now working on learning OO and want to integrate this site into my localhost Zend Framework setup and slowly start converting parts to OO, and to the Zend Framework MVC process in particular. As I've said before, I understand that this will be a slow process, and when I'm done I still probably won't have anything as OO-friendly as if I had rewritten it from scratch, but I'd like to use this as a learning experience. So, I have dropped the whole site into my localhost/zend/Public folder, and everything is showing up great and linking to the database, etc. My question is: what would be the easiest first component to switch over to the MVC model? This site has a bit of everything - forms, login, authentication, some jQuery, etc. Can anyone point to a tutorial that would address what I'm trying to do? If indeed a form would be one of the simpler things to switch, can someone walk me through those changes? Another idea is changing over all the header info, etc. Thanks for any pointers on where to start!

    EDIT: Also, I understand that SO is mainly for specific coding questions - I'm happy to share specific code once I have an idea about which section to tackle first...

  • Best practice - How to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain. I'm starting work on a brand new project; it will have two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.) and "THE SITE" (help, information, marketing stuff, promotional landing pages, etc. - lots of marketing-induced cruft). So basically the url /acme/* will load stuff in the uber-cool ajaxy application, and every other URI will load stuff in the other site.

    Problem: the "THE SITE" component is out of my hands, and will be handled by a consultants' team that will work closely with marketing, while I and my team work solely on the ACME PRODUCT.

    Question: how do I set up the Django project in such a way that we can have:

    Separate releases. (They can push new marketing pages and functionality without having to worry about the state of our code. Maybe even separate Subversion "projects".)

    Minimal impact (on our product) of whatever flying-unicorns-hocus-pocus the other team codes into the site.

    Still some code reuse.

    My main concern is that the ACME product needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make in their marketing side of the site. How have you handled this? Any ideas? Thanks!

  • WPF Single Selection Across Multiple ItemsControls

    - by gregsdennis
    Part of my app has a month-view calendar interface, but I'm having trouble with item selection. The interface is set up so that each of the days in the view contains a ListBox of items, much like the month view in Outlook. The problem I'm experiencing is that I need to maintain a single item selection across all of the ListBoxes. Below is a sample window that should adequately describe my situation; I need to maintain a single selection between both ListBoxes.

        <Window x:Class="StackOverflow.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="350" Width="525">
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition />
                    <ColumnDefinition />
                </Grid.ColumnDefinitions>
                <ListBox ItemsSource="{Binding Numbers}" SelectedItem="{Binding SelectedObject}" />
                <ListBox Grid.Column="1" ItemsSource="{Binding Dates}" SelectedItem="{Binding SelectedObject}" />
            </Grid>
        </Window>

    In this primitive example, I would expect that when the SelectedObject property of my view model gets set to an item that's not in one ListBox, the selection would be removed in that ListBox, but that doesn't happen. I understand that I can simply name each ListBox and hook into the SelectionChanged event; I'd prefer not to have to do that with an entire month-view calendar. There has to be a better way. In a previous iteration of the app, I was able to create a SelectionManager static class with an attached property that was used to maintain selection. However, I can't use this now, as the classes I'm using for my items are not DependencyObjects, and I'd really prefer not to have to create DependencyObject wrapper classes, as this would considerably complicate my architecture. Thanks.

  • rbind.zoo doesn't seem to create a consistent zoo object

    - by a-or-b
    I want to rbind.zoo two zoo objects together. When I was testing, I came across the following issue(?)... Note: the below is an example; there is clearly no point to it apart from being illustrative. I have a zoo object, call it 'X'. I want to break it into two parts and then rbind.zoo them together. When I compare the result to the original object, all.equal gives differences. It appears that the '$class' attribute differs, but I can't see how or why. If I make these xts objects then all.equal works as expected, i.e.:

        X.date <- as.POSIXct(paste("2003-", rep(1:4, 4:1), "-", sample(1:28, 10, replace = TRUE), sep = ""))
        X <- zoo(matrix(rnorm(24), ncol = 2), X.date)

        a <- X[c(1:3), ]   # first 3 elements
        b <- X[c(4:6), ]   # second 3 elements

        c <- rbind.zoo(a, b)   # rbind into an object of 6 elements
        d <- X[c(1:6), ]       # all 6 elements

        all.equal(c, d)   # are they equal?

    all.equal gives me the following difference:

        "Attributes: < Component 3: Attributes: < Length mismatch: comparison on first 1 components "

  • JTable cells not rendering shapes properly

    - by Andrew
    I'm trying to render my JTable cells with a subclassed JPanel; the cells should appear as coloured rectangles with a circle drawn on them. When the table is displayed initially everything looks OK, but when a dialog or something is displayed over the cells and then removed, the cells that were covered are not rendered properly and the circles are broken up, etc. I then have to move the scroll bar or extend the window to get them to redraw properly. The paintComponent method of the component I'm using to render the cells is below:

        protected void paintComponent(Graphics g) {
            setOpaque(true);
            super.paintComponent(g);
            Graphics2D g2d = (Graphics2D) g;

            GradientPaint gradientPaint = new GradientPaint(
                    new Point2D.Double(0, 0), Color.WHITE,
                    new Point2D.Double(0, getHeight()), paintRatingColour);
            g2d.setPaint(gradientPaint);
            g2d.fillRect(0, 0, getWidth(), getHeight());

            g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            g2d.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);

            Rectangle clipBounds = g2d.getClipBounds();
            int x = new Double(clipBounds.getWidth()).intValue() - 15;
            int y = (new Double(clipBounds.getHeight()).intValue() / 2) - 6;

            if (level != null) {
                g2d.setColor(iconColour);
                g2d.drawOval(x, y, width, height);
                g2d.fillOval(x, y, width, height);
            }
        }
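
    A likely culprit, offered as a guess: the oval's position is computed from g2d.getClipBounds(), and after a dialog is dismissed Swing repaints only the damaged region, so the clip rectangle is smaller than the cell and the circle ends up misplaced or cut apart. Positioning off the component's own size avoids that; setOpaque(true) also belongs in the constructor rather than inside the paint path. A sketch of the same method with those changes (paintRatingColour, iconColour, level, width and height are the fields assumed from the original class):

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2d = (Graphics2D) g.create();   // work on a copy so paint/hints don't leak

            // Gradient background over the whole cell component.
            GradientPaint gradientPaint = new GradientPaint(
                    new Point2D.Double(0, 0), Color.WHITE,
                    new Point2D.Double(0, getHeight()), paintRatingColour);
            g2d.setPaint(gradientPaint);
            g2d.fillRect(0, 0, getWidth(), getHeight());

            g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            g2d.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);

            // Position relative to the component, not to the (possibly partial) clip bounds.
            int x = getWidth() - 15;
            int y = (getHeight() / 2) - 6;

            if (level != null) {
                g2d.setColor(iconColour);
                g2d.drawOval(x, y, width, height);
                g2d.fillOval(x, y, width, height);
            }
            g2d.dispose();
        }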
