Search Results

Search found 10438 results on 418 pages for 'power architecture'.

Page 372/418

  • Multiple, Simultaneous Factories and Protocols in Twisted: Same Service, Different Ports

    - by RichardCroasher
    Greetings, Forum. I'm working on a Python program that uses Twisted to manage networking. At its core is a TCP service that listens for connections on multiple ports. However, instead of using one Twisted factory to supply protocol objects for every port, I am trying to use a separate factory for each port. The reason for this is to force a separation among the groups of clients connecting to the different ports. Unfortunately, it appears that this architecture isn't quite working: clients that connect to one port appear to be visible to all the factories (e.g., the protocol class used by each factory includes a 'self.factory.clients.append(self)' statement; instead of adding a given client to just the factory for its particular port, the client is added to all factories), and whenever I shut down the service on one port, the listeners on all ports also stop. I've been working with Twisted for only a short while, and fear I simply don't fully understand how its factory classes are managed. My question is: is it simply not possible to have multiple, simultaneous instances of the same factory and same protocol in use across different ports (without these instances stepping on each other's toes)?
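
    For what it's worth, separate factories per port are a supported arrangement in Twisted; a shared-clients symptom like this usually points at a clients list created once at class level (a mutable class attribute) rather than per instance in __init__. A minimal sketch, with hypothetical port numbers, where each factory instance keeps its own list and stopping one listener leaves the others alone:

        from twisted.internet import protocol, reactor

        class Echo(protocol.Protocol):
            def connectionMade(self):
                # self.factory is set by the factory that built this protocol,
                # so this registers the client with that factory only
                self.factory.clients.append(self)

        class PortFactory(protocol.ServerFactory):
            protocol = Echo

            def __init__(self):
                # per-instance list; a class-level "clients = []" would be
                # shared by every factory instance
                self.clients = []

        listeners = []
        for port in (8001, 8002):                      # hypothetical ports
            listeners.append(reactor.listenTCP(port, PortFactory()))

        # stopping one port does not touch the others:
        # listeners[0].stopListening()

        reactor.run()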

    Read the article

  • Grid is not rendered properly in ExtJS 4 when using a Store

    - by user548543
    Here is the source code. The HTML file:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
            <title>MVC Architecture</title>
            <link rel="stylesheet" type="text/css" href="/bh/extjs/resources/css/ext-all.css" />
            <script type="text/javascript" src="extjs/ext-debug.js"></script>
            <script type="text/javascript" src="Main.js"></script>
        </head>
        <body>
        </body>
        </html>

    File path: /bh/Main.js [Main File]

        Ext.require('Ext.container.Viewport');
        Ext.application({
            name: 'App',
            appFolder: 'app',
            controllers: ['UserController'],
            launch: function() {
                Ext.create('Ext.container.Viewport', {
                    layout: 'border',
                    items: [ { xtype: 'userList' } ]
                });
            }
        });

    File path: /app/controller/UserController.js [Controller]

        Ext.define('App.controller.UserController', {
            extend: 'Ext.app.Controller',
            stores: ['UserStore'],
            models: ['UserModel'],
            views: ['user.UserList'],
            init: function() {
                this.getUserStoreStore().load();
            }
        });

    File path: /app/store/UserStore.js [Store]

        Ext.define('App.store.UserStore', {
            extend: 'Ext.data.Store',
            model: 'App.model.UserModel',
            proxy: {
                type: 'ajax',
                url: 'app/data/contact.json'
            }
        });

    File path: /app/model/UserModel.js [Model]

        Ext.define('App.model.UserModel', {
            extends: 'Ext.data.Model',
            fields: [
                {name: 'name', type: 'string'},
                {name: 'age', type: 'string'},
                {name: 'phone', type: 'string'},
                {name: 'email', type: 'string'}
            ]
        });

    File path: /app/view/UserList.js [View]

        Ext.define('App.view.user.UserList', {
            extend: 'Ext.grid.Panel',
            alias: 'widget.userList',
            title: 'Contacts',
            region: 'center',
            resizable: true,
            initComponent: function() {
                this.store = 'UserStore';
                this.columns = [
                    {text: 'Name', flex: 1, sortable: true, dataIndex: 'name'},
                    {text: 'Age', flex: 1, sortable: true, dataIndex: 'age'},
                    {text: 'Phone', flex: 1, sortable: true, dataIndex: 'phone'},
                    {text: 'Email', flex: 1, sortable: true, dataIndex: 'email'}
                ];
                this.callParent(arguments);
            }
        });

    In Firebug the JSON response shows as follows:

        [{
            "name": "Aswini",
            "age": "32",
            "phone": "555-555-5555",
            "email": "[email protected]"
        }]

    Why is the data not displayed even though I have a valid JSON response? Please help!

    Read the article

  • I'm trying to install psycopg2 onto Mac OS 10.6.3; it claims it can't find "stdarg.h" but I can see it

    - by cojadate
    I'm desperately trying to install psycopg2 successfully but keep running into errors. The latest one seems to involve it not being able to find "stdarg.h" (see output below). However, I can see with my own eyes that a file called stdarg.h exists at /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h (where it claims it can't find anything), so I've no idea what to do about it. I'm running Mac OS 10.6.3, and within the last few days I've made sure I have all the latest OS developer tools. I have Python 2.6.2 and PostgreSQL 8.4, if that makes any difference.

        python setup.py install
        running install
        running build
        running build_py
        running build_ext
        building 'psycopg2._psycopg' extension
        creating build/temp.macosx-10.3-fat-2.6
        creating build/temp.macosx-10.3-fat-2.6/psycopg
        gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.2.1 (dt dec ext pq3)" -DPG_VERSION_HEX=0x080404 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -DHAVE_PQPROTOCOL3=1 -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -I. -I/opt/local/include/postgresql84 -I/opt/local/include/postgresql84/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.3-fat-2.6/psycopg/psycopgmodule.o
        In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                         from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                         from psycopg/psycopgmodule.c:27:
        /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
        In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4,
                         from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85,
                         from psycopg/psycopgmodule.c:27:
        /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
        lipo: can't figure out the architecture type of: /var/folders/MQ/MQ-tWOWWG+izzuZCrAJpzk+++TI/-Tmp-//ccakFhRS.out
        error: command 'gcc' failed with exit status

    Read the article

  • Database design for a media server containing movies, music, tv and everything in between?

    - by user364114
    In the near future I am attempting to design a media server as a personal project. My first consideration to get the project underway is architecture; it will certainly be web based, but more specifically than that, I am looking for suggestions on the database design. So far I am considering something like the following, where [] represents a table, the first word is the table name (to give an idea of its purpose), and the items within {} would be the fields of the table. Also note: fid is a functional id referencing some other table.

        [Item {id, value/name, description, link, type}] - this could be any entity: a single song or a whole music album, a game, a movie. I almost see this as a recursive relation, i.e. a song is an item, but the album that song is part of is also an item; or, for example, a TV season is an item, with multiple items being the TV episodes.
        [Type {id, fid, mime type, etc}] - file-type-specific information; could identify how code handles streaming/sending this item to a user.
        [Location {id, fid, path to file?}]
        [Users {id, username, email, password, ...?}] - user account information.
        [UAC {id, fid, access level}] - I almost feel it's more flexible to separate access control permissions from the user accounts themselves.
        [ItemLog {id, fid, fid2, timestamp}] - fid for user id, and fid2 for item id; this way we know which user accessed what, and when.
        [UserLog {id, fid, timestamp}] - both are logs for access, whether login or last item access.
        [Quota {id, fid, cap}] - some sort of way to throttle users from queuing up the entire site and letting it download.

    Suggestions or comments are welcome, as the hope is that this will become an open source project once some code is laid out.

    Read the article

  • Can a standalone ruby script (windows and mac) reload and restart itself?

    - by user30997
    I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote-console into each machine to kill the worker, do a source control sync, and restart. I would like the master to be able to put a message out on the network that tells each machine to sync and restart. That's where I hit a roadblock. If I were using any sane platform, I could just do:

        exec('ruby', __FILE__)

    ...and be done. However, I did the following test:

        p Process.pid
        sleep 1
        exec('ruby', __FILE__)

    ...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried, it executes the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along. The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution - which I would expect, considering that I am seeing a new process in the task manager for each run. The Mac is behaving correctly: the pid is the same for every system call, and I have verified with dtrace that each run triggers a call to the execve syscall. So, in short, is there a way to get a Windows ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a Rails application, though it does use ActiveRecord.

    Read the article

  • How to Manage CSS Explosion

    - by Jason
    I have been relying heavily on CSS for a website that I am working on (currently, everything is done as property values within each tag on the website, and I'm trying to get away from that to make updates significantly easier). The problem I am running into is that I'm starting to get a bit of "CSS explosion": it is becoming difficult for me to decide how best to organize and abstract data within the CSS file. For example, I am using a large number of div tags within the website (previously it was completely table-based), so I'm starting to get a lot of CSS that looks like this:

        div.title {
            background-color: Blue;
            color: White;
            text-align: center;
        }

        div.footer {
            /* Stuff Here */
        }

        div.body {
            /* Stuff Here */
        }

        etc.

    It's not too bad yet, but since I am learning here, I was wondering if recommendations could be made on how best to organize the various parts of a CSS file. What I don't want to get to is a separate CSS attribute for every single thing on my website (which I have seen happen), and I always want the CSS file to be fairly intuitive. (P.S. I do realize this is a very generic, high-level question. My ultimate goal is to make it easy to use the CSS files and to demonstrate their power to increase the speed of web development, so other individuals who may work on this site in the future will also get into the practice of using them rather than hard-coding values everywhere.)

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing, and I'd like some more opinions.

    The situation: the program should update in near real time (+/- 1 minute). It involves the movement of objects on a coordinate system. Some events occur at regular intervals (i.e. creation of the objects). Movements can change at any time through user input.

    My solution was: build a server that runs continuously and stores the data internally, and have it dump a state-of-the-program snapshot at regular intervals to protect against power failures and/or crashes.

    He argued that the program requires a database and that I should use cron jobs to update the data. I can store movement information as start point, end point, and speed, and update the position in the cron job (and calculate collisions with other objects there) by calculating direction and speed.

    His reasons: my approach requires more CPU and memory because it runs constantly; power failures/crashes might destroy data; databases are faster.

    My reasons against this are mostly: it's not very precise, as events can only occur at full minutes (wouldn't be that bad, though); it requires a (possibly costly) transformation of data on every run, from relational data to objects; an RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient; and power failures (or other crashes) can leave the data in an undefined state, with only partially updated data, unless (possibly costly) precautions (like transactions) are taken.

    What are your opinions about that? Which arguments can you add for either side?
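
    For what it's worth, the crash-safety objection to the in-memory server can be narrowed with an atomic snapshot: write the dump to a temporary file, then rename it over the previous one, so a crash mid-write never corrupts the last good state. A minimal sketch of that idea (file name hypothetical):

        import os
        import pickle
        import tempfile

        def dump_state(state, path="world.pickle"):
            # write to a temp file in the same directory, then rename:
            # a crash mid-write leaves the previous dump intact
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            with os.fdopen(fd, "wb") as f:
                pickle.dump(state, f, pickle.HIGHEST_PROTOCOL)
            os.rename(tmp, path)   # atomic on POSIX filesystems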

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) together with an ID for each JSON object. The JSON objects contain a lot of different information - say, "city: Los Angeles" and "state: California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do real-time searches in the MySQL database: say, I want to find all JSON objects whose "state" is "California" and whose "city" is "San Francisco", and I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them against the given search criteria, and returns the IDs of those that qualify. Pros/cons? I understand that one might think I should simply utilize the power of SQL for this, but the thing is that the JSON object structure is pretty "heavy": if I flatten it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it is utilizing the indexes; otherwise, with a full scan, it is literally painful. With such a structure, the only way "up" is vertical scaling, and I am not sure that's the best option for me, as I can see how the JSON objects (the data structure) will grow, and I can see that their number will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
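
    For illustration, the verification step maps naturally onto a Hadoop Streaming mapper. A sketch, assuming the rows are first exported from MySQL as one "id<TAB>json" line per record (the criteria and input format here are hypothetical, not from the question):

        # mapper.py - reads id<TAB>json lines on stdin, emits IDs that match
        import sys
        import json

        CRITERIA = {"state": "California", "city": "San Francisco"}  # hypothetical query

        for line in sys.stdin:
            record_id, blob = line.rstrip("\n").split("\t", 1)
            obj = json.loads(blob)
            if all(obj.get(k) == v for k, v in CRITERIA.items()):
                # no aggregation needed, so an identity reducer (or none) suffices
                print "%s\t1" % record_id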

    Read the article

  • Signals and Variables in VHDL (order) - Problem

    - by Morano88
    I have a signal, and this signal is a bit vector (Z). The length of the bit vector depends on an input n; it is not fixed. In order to find the length, I have to do some computations. Can I declare a signal after declaring the variables? It gives me errors when I do that. It works fine if I keep the signal declaration before the variables (which is what is shown below), but I don't want that: the length of Z depends on the computation done with the variables. What is the solution?

        library IEEE;
        use IEEE.STD_LOGIC_1164.ALL;
        use IEEE.STD_LOGIC_ARITH.ALL;
        use IEEE.STD_LOGIC_UNSIGNED.ALL;

        entity BSD_Full_Comp is
            Generic (n : integer := 8);
            Port (X, Y : inout std_logic_vector(n-1 downto 0);
                  FZ   : out std_logic_vector(1 downto 0));
        end BSD_Full_Comp;

        architecture struct of BSD_Full_Comp is

            Component BSD_BitComparator
                Port ( Ai_1 : inout STD_LOGIC;
                       Ai_0 : inout STD_LOGIC;
                       Bi_1 : inout STD_LOGIC;
                       Bi_0 : inout STD_LOGIC;
                       S1   : out STD_LOGIC;
                       S0   : out STD_LOGIC );
            END Component;

            Signal Z : std_logic_vector(2*n-3 downto 0);

        begin
            ass : process
                Variable length : integer := n;
                Variable pow    : integer := 0;
                Variable ZS     : integer := 0;
            begin
                while length /= 0 loop
                    length := length/2;
                    pow    := pow+1;
                end loop;
                length := 2 ** pow;
                ZS     := length - n;
                wait;
            end process;
        end struct;

    Read the article

  • How to restrain one's self from the overwhelming urge to rewrite everything?

    - by Scott Saad
    Setup: Have you ever had the experience of going into a piece of code to make a seemingly simple change and then realizing that you've just stepped into a wasteland that deserves some serious attention? This usually gets followed up with an official FREAK OUT moment, where the overwhelming urge to rewrite everything in sight starts to creep up. It's important to note that this bad code does not necessarily come from others, as it may indeed be something we've written or contributed to in the past.

    Problem: It's obvious that there is some serious code rot, horrible architecture, etc. that needs to be dealt with. The real problem, as it relates to this question, is that it's not the right time to rewrite the code. There could be many reasons for this:

    - We're currently in the middle of a release cycle, so any changes should be minimal.
    - It's 2:00 AM, and the brain is starting to shut down.
    - It could have seemingly adverse effects on the schedule.
    - The rabbit hole could go much deeper than our eyes are able to see at this time.
    - etc.

    Question: So how should we balance the duty of continuously improving the code with being a responsible developer? How do we refrain from contributing to the broken window theory, while also being aware of our actions and the potential recklessness they may cause?

    Update: Great answers! For the most part, there seem to be two schools of thought: don't resist the urge, as it's a good one to have; and don't give in to the temptation, as it will burn you to the ground. It would be interesting to know if more people feel any balance exists.

    Read the article

  • mprotect - how does aligning to a multiple of the page size work?

    - by user299988
    Hi, I don't understand the 'aligning allocated memory' part of the mprotect usage. I am referring to the code example given at http://linux.die.net/man/2/mprotect

        char *p;
        char c;

        /* Allocate a buffer; it will have the default
           protection of PROT_READ|PROT_WRITE. */
        p = malloc(1024+PAGESIZE-1);
        if (!p) {
            perror("Couldn't malloc(1024)");
            exit(errno);
        }

        /* Align to a multiple of PAGESIZE, assumed to be a power of two */
        p = (char *)(((int) p + PAGESIZE-1) & ~(PAGESIZE-1));

        c = p[666];         /* Read; ok */
        p[666] = 42;        /* Write; ok */

        /* Mark the buffer read-only. */
        if (mprotect(p, 1024, PROT_READ)) {
            perror("Couldn't mprotect");
            exit(errno);
        }

    To check my understanding, I tried using a PAGESIZE of 16 and 0010 as the address of p, and I ended up with 0001 as the result of (((int) p + PAGESIZE-1) & ~(PAGESIZE-1)). Could you please clarify how this whole 'alignment' works? Thanks,
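
    Worked through in Python, the mask arithmetic behaves like this: adding PAGESIZE-1 first makes the truncation round up instead of down, and ANDing with ~(PAGESIZE-1) clears the low log2(PAGESIZE) bits. Note that 0x0010 is already a multiple of 16, so it comes out unchanged; neither reading of "0010" produces 0001, so the poster's result looks like a slip in the manual arithmetic.

        PAGESIZE = 16                       # must be a power of two
        mask = ~(PAGESIZE - 1)              # ...11110000: clears the low 4 bits

        def align_up(p):
            # add PAGESIZE-1 so that truncating to the mask rounds up
            return (p + PAGESIZE - 1) & mask

        print hex(align_up(0x0010))   # 0x10 - already aligned, unchanged
        print hex(align_up(0x0011))   # 0x20 - rounded up to the next boundary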

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for storage. The data model requires some flexibility per user (the ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand whether it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeals. The options I'm considering include:

    1. Using WCF RIA Services and passing the data over the network directly in EAV form (i.e. Dictionary), and handling the binding issue purely on the client side (like this).
    2. Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and to transform to/from EAV form on the server (a spam preventer stopped me from posting a URL here; I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't yet know what the limitations of that approach might be.

    I'm interested in comments on these approaches, as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).

    Read the article

  • I'm making a simulated TV

    - by Jam
    I need to make a TV that shows the user the channel and the volume, and shows whether or not the television is on. I have the majority of the code written, but for some reason the channels won't switch. I'm fairly unfamiliar with how properties work, and I think that's what my problem is. Help, please.

        class Television(object):
            def __init__(self, __channel=1, volume=1, is_on=0):
                self.__channel=__channel
                self.volume=volume
                self.is_on=is_on

            def __str__(self):
                if self.is_on==1:
                    print "The tv is on"
                    print self.__channel
                    print self.volume
                else:
                    print "The television is off."

            def toggle_power(self):
                if self.is_on==1:
                    self.is_on=0
                    return self.is_on
                if self.is_on==0:
                    self.is_on=1
                    return self.is_on

            def get_channel(self):
                return channel

            def set_channel(self, choice):
                if self.is_on==1:
                    if choice>=0 and choice<=499:
                        channel=self.__channel
                    else:
                        print "Invalid channel!"
                else:
                    print "The television isn't on!"

            channel=property(get_channel, set_channel)

            def raise_volume(self, up=1):
                if self.is_on==1:
                    self.volume+=up
                    if self.volume>=10:
                        self.volume=10
                        print "Max volume!"
                else:
                    print "The television isn't on!"

            def lower_volume(self, down=1):
                if self.is_on==1:
                    self.volume-=down
                    if self.volume<=0:
                        self.volume=0
                        print "Muted!"
                else:
                    print "The television isn't on!"

        def main():
            tv=Television()
            choice=None
            while choice!="0":
                print \
                """
                Television
                0 - Exit
                1 - Toggle Power
                2 - Change Channel
                3 - Raise Volume
                4 - Lower Volume
                """
                choice=raw_input("Choice: ")
                print
                if choice=="0":
                    print "Good-bye."
                elif choice=="1":
                    tv.toggle_power()
                    tv.__str__()
                elif choice=="2":
                    change=raw_input("What would you like to change the channel to?")
                    tv.set_channel(change)
                    tv.__str__()
                elif choice=="3":
                    tv.raise_volume()
                    tv.__str__()
                elif choice=="4":
                    tv.lower_volume()
                    tv.__str__()
                else:
                    print "\nSorry, but", choice, "isn't a valid choice."

        main()
        raw_input("Press enter to exit.")
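
    A few things in the posted code keep the channel from ever changing: get_channel returns a bare name channel rather than the attribute, set_channel stores the old value into a local instead of saving choice, and choice arrives from raw_input as a string. A minimal sketch of a working property (not the full program, names shortened):

        class Television(object):
            def __init__(self, channel=1):
                self.__channel = channel          # mangled to _Television__channel

            def get_channel(self):
                return self.__channel             # read the attribute, not a bare name

            def set_channel(self, choice):
                choice = int(choice)              # raw_input returns a string
                if 0 <= choice <= 499:
                    self.__channel = choice       # store the new value
                else:
                    print "Invalid channel!"

            channel = property(get_channel, set_channel)

        tv = Television()
        tv.channel = "42"    # goes through set_channel
        print tv.channel     # 42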

    Read the article

  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCI

    - by John Millikin
    I am using GHC 6.12.1 on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly; compiled modules have invalid pointers and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue. I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as:

        extern __const char *__const sys_siglist[_NSIG];

    I thought this type might be the problem, so I also tried wrapping it in a plain C procedure:

        #include <stdio.h>

        const char **test_ffi_import() {
            printf("C think sys_siglist = %X\n", sys_siglist);
            return sys_siglist;
        }

    However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading. Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment? Code as follows:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C
        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a " siglist_a
            peekSiglist "b " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, where all pointer values are identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a [2] = Just "Interrupt"
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a [2] = Nothing
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article

  • Using DTOs and BOs

    - by ryanzec
    One area of question for me with DTOs/BOs is when to pass/return the DTOs and when to pass/return the BOs. My gut reaction tells me to always map NHibernate to the DTOs, not the BOs, and always pass/return the DTOs. Then, whenever I needed to perform business logic, I would convert my DTO into a BO. The way I would do this is to give my BO a constructor whose only argument is typed as an interface (one that defines the required fields/properties) that both my DTO and BO implement. I would then be able to create my BO by passing it the DTO in the constructor (since both implement the same interface, they both have the same properties) and then perform my business logic with that BO. I would also have a way to convert a BO back to a DTO. However, I have also seen designs where people seem to work only with BOs, with DTOs used behind the scenes so that, to the consumer, it looks like there are no DTOs. What benefits/downfalls are there with that architecture versus always using BOs? Should I always be passing/returning either DTOs or BOs, or mix and match (it seems like mixing and matching could get confusing)?
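
    The constructor-from-shared-interface idea is language-agnostic; sketched here in Python, with duck typing standing in for the interface (all names hypothetical):

        class UserDTO(object):
            # dumb carrier: fields only, no behavior
            def __init__(self, name, email):
                self.name = name
                self.email = email

        class UserBO(object):
            # built from anything exposing the shared fields
            def __init__(self, data):
                self.name = data.name
                self.email = data.email

            def display_name(self):
                # business logic lives only on the BO
                return "%s <%s>" % (self.name, self.email)

            def to_dto(self):
                return UserDTO(self.name, self.email)

        bo = UserBO(UserDTO("Ada", "ada@example.com"))
        dto = bo.to_dto()   # back to a plain carrier for the service boundary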

    Read the article

  • Working with large sprite sheets on iPhone

    - by lukya
    Hi all, I am trying to use sprite sheet animation in my application. The first POC with a small sprite sheet worked fine, but when I changed to a bigger sprite sheet, I get a "check_safe_call: could not restore current frame" warning and the application quits. A quick search revealed that this warning means my app is taking too much memory or the image is too huge in dimension. My image is 4.9 MB with dimensions of 6720 x 10080 (oops!!). I read that the iPhone allows at most a 3 MB image with dimensions up to 1024 x 1024, and also that sprite sheet dimensions should be a power of two. So please let me know how I can use a sprite sheet this big. One approach could be to cut the sprite sheet into many smaller sprite sheets and use them one at a time. Please suggest if you know any other/better approach to accommodate bigger sprite sheets, and whether the problem with my sprite sheet is its size (4.9 MB) or its dimensions (6720 x 10080). (Just FYI, I am not trying to play a movie, so using an MP4 file instead is not an option for me. I need to animate the sprite sheet based on accelerometer input, and I was able to achieve that in my POC with the smaller sprite sheet.) Thanks, Swapnil
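
    The cut-into-smaller-sheets approach can be done offline. A sketch using the Python Imaging Library, with hypothetical tile dimensions chosen to divide 6720 x 10080 evenly while staying under 1024 x 1024 (the tiles aren't powers of two themselves, so each would still be padded up to a power-of-two texture at load time):

        # slice_sheet.py - cut an oversized sprite sheet into loadable tiles
        from PIL import Image

        sheet = Image.open("sheet.png")      # hypothetical 6720 x 10080 source
        tile_w, tile_h = 960, 1008           # 6720/960 = 7 columns, 10080/1008 = 10 rows

        for row in range(sheet.size[1] // tile_h):
            for col in range(sheet.size[0] // tile_w):
                box = (col * tile_w, row * tile_h,
                       (col + 1) * tile_w, (row + 1) * tile_h)
                sheet.crop(box).save("sheet_%d_%d.png" % (row, col))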

    Read the article

  • Uncaught TypeError: Object [object Object] has no method 'onAdded'

    - by user3604227
    I am using ExtJS 4 with Java servlets, following the MVC architecture for ExtJS. I am trying a simple example of displaying a border layout, but it doesn't work, and I get the following error in ext-all.js in the JavaScript console:

        Uncaught TypeError: Object [object Object] has no method 'onAdded'

    Here is my code. app.js:

        Ext.Loader.setConfig({ enabled : true });

        Ext.application({
            name : 'IN',
            appFolder : 'app',
            controllers : [ 'Items' ],
            launch : function() {
                console.log('in LAUNCH-appjs');
                Ext.create('Ext.container.Viewport', {
                    items : [ { xtype : 'borderlyt' } ]
                });
            }
        });

    Items.js (controller):

        Ext.define('IN.controller.Items', {
            extend : 'Ext.app.Controller',
            views : [ 'item.Border' ],
            init : function() {
                this.control({
                    'viewport > panel' : {
                        render : this.onPanelRendered
                    }
                });
            },
            onPanelRendered : function() {
                console.log('The panel was rendered');
            }
        });

    Border.js (view):

        Ext.define('IN.view.item.Border', {
            extend : 'Ext.layout.container.Border',
            alias : 'widget.borderlyt',
            title : 'Border layout',
            autoShow : true,
            renderTo : Ext.getBody(),
            defaults : {
                split : true,
                layout : 'border',
                autoScroll : true,
                height : 800,
                width : 500
            },
            items : [ {
                region : 'north',
                html : "Header here..",
                id : 'mainHeader'
            }, {
                region : 'west',
                width : 140,
                html : "Its West..",
            }, {
                region : 'south',
                html : "This is my temp footer content",
                height : 30,
                margins : '0 5 5 5',
                bodyPadding : 2,
                id : 'mainFooter'
            }, {
                id : 'mainContent',
                collapsible : false,
                region : 'center',
                margins : '5',
                border : true,
            } ]
        });

    The folder structure for the WebContent is as follows:

        WebContent
            app
                controller
                    Items.js
                model
                store
                view
                    item
                        Border.js
            ext_js
                resources
                src
                ext_all.js
            index.html
            app.js

    Can someone help me resolve this error? Thanks in advance.

    Read the article

  • What is the optimal hardware configuration for a heavy-load LAMP application?

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time, and "work" means not only database reads but writes as well. The application is not very typical in that it does a lot of inserts/updates on the database, so caching techniques don't help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. Right now we are using two HP DLG 380 servers with two quad-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious about:

    - how many servers are needed, and how powerful (number of processors/cores, size of RAM);
    - what network equipment should be used (what kind of switches, network cards);
    - any other hardware, like particular disk storage solutions, that is needed.

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • Drawing only part of a texture OpenGL ES iPhone

    - by Ben Reeves
    Continued from my previous question: I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.

        - (void)setupView {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
            glEnable(GL_TEXTURE_2D);
        }

        // Updates the OpenGL view when the timer fires
        - (void)drawView {
            // Make sure that you are drawing to the current context
            [EAGLContext setCurrentContext:context];

            // Get the 320*480 buffer
            const int8_t * frameBuf = [source getNextBuffer];

            // Create enough storage for a 512x512 power-of-two texture
            int8_t lBuf[2*512*512];
            memcpy(lBuf, frameBuf, 320*480*2);

            // Upload the texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);

            // Draw it
            glDrawTexiOES(0, 0, 1, 480, 320);

            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    If I produce the original texture at 512*512, the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine:

        int8_t lBuf[512][512][2];
        const char * frameDataP = frameData;
        for (int ii = 0; ii < 480; ++ii) {
            memcpy(lBuf[ii], frameDataP, 320);
            frameDataP += 320;
        }

    That is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
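
    The copy routine looks suspect on the byte counts: RGB565 is 2 bytes per pixel, so a 320-pixel row is 640 bytes, not 320, and each destination row must begin 512*2 = 1024 bytes into the buffer. A sketch of the stride-correct copy, written in Python for clarity rather than speed:

        # pad a 320x480 RGB565 frame into a 512x512-texel, 2-bytes-per-pixel buffer
        SRC_W, SRC_H, DST_W, BPP = 320, 480, 512, 2

        def pad_rows(frame):
            # frame is SRC_W*SRC_H*BPP bytes; each source row is 640 bytes,
            # while destination rows start DST_W*BPP = 1024 bytes apart
            out = bytearray(DST_W * DST_W * BPP)
            for y in range(SRC_H):
                src = y * SRC_W * BPP
                dst = y * DST_W * BPP
                out[dst:dst + SRC_W * BPP] = frame[src:src + SRC_W * BPP]
            return out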

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort both to save battery power and to make the whole thing less "jittery", I'm trying to come up with a way to run that object recognizer only when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving; all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
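
    One much cheaper alternative than optical flow is plain frame differencing on a heavily downsampled grayscale copy: it yields a single global change score rather than per-feature tracks. A sketch with OpenCV's Python bindings (the frame size and threshold here are hypothetical):

        import cv2

        def motion_score(prev_small, curr_small):
            # mean absolute pixel difference over a tiny grayscale frame:
            # O(n) in a few thousand pixels, no feature tracking at all
            return cv2.absdiff(prev_small, curr_small).mean()

        # usage sketch:
        # small = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (80, 60))
        # if motion_score(prev_small, small) > 8.0:   # hypothetical threshold
        #     run_object_recognizer(frame)
        # prev_small = small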

    Read the article

  • Could Python's logging SMTP Handler be freezing my thread for 2 minutes?

    - by Oddthinking
    A rather confusing sequence of events has happened, according to my log file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true. I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application, when it is missing deadlines. I am using Python's logging module on a remote server and have set it up, with a configuration file, so that all logs of severity ERROR or higher are emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems I might get a dozen in a minute - annoying, but nothing that should stress SMTP. I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) encounters errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging off the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.) This seems like a rather awkward architecture (both for a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms. Has anyone had any experience with this? Can anyone describe how to stop it from blocking? ** EDIT: I actually counted: a little under 4000 short emails in two hours. So far more than I suggested, sorry. But enough to overfill sendmail's buffers?
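
    One general way to keep SMTP delivery off the critical path is to log through a queue and let a worker thread do the emailing. Python 3.2+ ships this as QueueHandler/QueueListener in logging.handlers (on the Python 2 of this era, the same idea has to be hand-rolled with a thread); a sketch with hypothetical addresses:

        import logging
        import queue
        from logging.handlers import QueueHandler, QueueListener, SMTPHandler

        q = queue.Queue(-1)                          # unbounded
        smtp = SMTPHandler(mailhost="localhost",
                           fromaddr="app@example.com",
                           toaddrs=["me@example.com"],
                           subject="Application error")

        listener = QueueListener(q, smtp)            # emailing runs on the listener's thread
        listener.start()

        qh = QueueHandler(q)
        qh.setLevel(logging.ERROR)                   # only errors ever reach the queue
        logging.getLogger().addHandler(qh)           # callers now pay only for an enqueue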

    Read the article

  • Setting MinimumSize Attribute for a Control in a TFS WorkItem

    - by Jay Yother
    Tools used: Visual Studio 2008 SP1, Team Explorer, Team Foundation Server Power Tools October 2008 Release. Using the Process Editor in Visual Studio, I am attempting to set the MinimumSize attribute for a control in a WorkItem template to make the default size of the input area larger. I am setting the attribute according to this website: http://msmvps.com/blogs/vstsblog/archive/2007/07/07/undocumented-attributes-for-controlling-the-work-item-form-layout.aspx No matter what I set this attribute to, it has no effect. I have tried setting the attribute with and without surrounding (), and I've tried different capitalizations of the attribute, but no luck. I have verified that the MinimumSize attribute is being set correctly in the associated XML file. The control (an HtmlFieldControl) is currently set up as the second child on a tab page (the first control is also an HtmlFieldControl). I've tried adding a group to the tab page such that the hierarchy is TabPage-Group-Column-Control, with no success. I've also tried setting the attribute on the first control, with no luck. Any idea what I am doing wrong?

    Read the article

  • Overriding Node Paths

    - by fighella
    Can a PAGE override the priority of a NODE? I want to use the power of "pages" to override my links from the nodes. For example: a node's link is /content/202/hello-world. I want my panels to use the URL to create a "page"; the panels can use the arguments from the URL to create a pretty cool page around the content of a node. So the PAGE path I've made is /content/%argument1/%title. I need the links created to the node, from the node itself, to go to this panel page and not to the content created by the node on its own - so I make the path alias do the same as the PAGE settings: /content/nid/title. A hand-typed link to this PAGE works fine when I don't have this as the path alias; it does exactly what I need. But as soon as I make this the path alias, it's like it goes to the single NODE before it goes to the PAGE. Anyone have any clues? Is there an order in which Drupal looks for the correct URL? I'm sure I have done this before. JD

    Read the article

  • Splitting builds across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines?

    Use case: We are an average software development company. We own around 50 development workstations (Quad Core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously, all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes.

    The problem: Whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (building is only a part of the game; then you need to deploy, run tests, etc.). Yes, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!" Yeah, right (damn them)!

    Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be canceled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#.)

    Read the article

  • Load Spikes on an Apache/MySQL Server with WordPress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on dedicated hardware with enough processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out a DoS attack and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks for high load every 3 minutes and restarts Apache; restarting it brings the server back, until it happens again. There seems to be no set time frame, or number of visitors on the site, that can trigger this - even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes until it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the issue is? What should I monitor? Apache logs are voluminous and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram

    Read the article
