Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • Unable to get values in ftl from value stack in custom Result Type

    - by Nagadev
    Hello, I am unable to retrieve a value from the value stack in an FTL file. Here is the code. The action class holds a property called 'name':

        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String execute() { setName("From value stack .. "); return SUCCESS; }

    FTL code: ${name}. The custom result type's doExecute method:

        Configuration configuration = new Configuration();
        String templatePath = "/ftl";
        ServletContext context = ServletActionContext.getServletContext();
        configuration.setServletContextForTemplateLoading(context, templatePath);
        configuration.setObjectWrapper(new DefaultObjectWrapper());
        Template template = configuration.getTemplate("sample.ftl");
        OutputStreamWriter out = new OutputStreamWriter(System.out);
        template.process(ActionContext.getContext().getValueStack(), out);

    I am passing the value stack, which contains the recently executed action as well, but FTL throws an exception: Expression name is undefined on line 1, column 3 in sample.ftl. I tried passing the session instead of the value stack, and that way I could get the value in the FTL. Please suggest a way to get values from the action class to FTL via the value stack. Thanks in advance.
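
    One workaround, sketched below on the assumption that the executed action is still on the stack: FreeMarker's DefaultObjectWrapper cannot resolve top-level template variables against a raw ValueStack, so pull the values out with findValue() and hand the template a plain Map instead:

        // Fragment of a hypothetical doExecute(); findValue("name") locates
        // the action's getName() through the usual OGNL search order.
        import java.util.HashMap;
        import java.util.Map;
        import com.opensymphony.xwork2.util.ValueStack;

        ValueStack stack = ActionContext.getContext().getValueStack();
        Map<String, Object> model = new HashMap<String, Object>();
        model.put("name", stack.findValue("name"));
        template.process(model, out);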

  • Working with complex objects in Prevayler commands

    - by alexantd
    The demos included in the Prevayler distribution show how to pass a couple of strings (or something similarly simple) into a command constructor in order to create or update an object. The problem is that I have an object called MyObject that has a lot of fields. If I had to pass all of them into the CreateMyObject command manually, it would be a pain. So an alternative I thought of is to pass my business object itself into the command, but to hang onto a clone of it (keeping in mind that I can't store the BO directly in the command). Of course, after executing this command I would need to make sure to dispose of the original copy that I passed in.

        public class CreateMyObject implements TransactionWithQuery {
            private MyObject object;

            public CreateMyObject(MyObject business_obj) {
                this.object = (MyObject) business_obj.clone();
            }

            public Object executeAndQuery(...) throws Exception { ... }
        }

    The Prevayler wiki says: Transactions can't carry direct object references (pointers) to business objects. This has become known as the baptism problem because it's a common beginner pitfall. Direct object references don't work because once a transaction has been serialized to the journal and then deserialized for execution, its object references no longer refer to the intended objects -- any objects they may have referred to at first will have been copied by the serialization process! Therefore, a transaction must carry some kind of string or numeric identifiers for any objects it wants to refer to, and it must look up the objects when it is executed. I think by cloning the passed-in object I will be getting around the "direct object pointer" problem, but I still don't know whether or not this is a good idea...
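
    For comparison, here is a minimal sketch of the identifier-carrying style the wiki recommends, assuming the prevalent system can look business objects up by ID (MyPrevalentSystem and lookup are invented names):

        import java.util.Date;
        import org.prevayler.TransactionWithQuery;

        public class UpdateMyObject implements TransactionWithQuery {
            private final long objectId;          // identifier, not a pointer
            private final String newDescription;  // plain serializable state

            public UpdateMyObject(long objectId, String newDescription) {
                this.objectId = objectId;
                this.newDescription = newDescription;
            }

            public Object executeAndQuery(Object prevalentSystem, Date executionTime) throws Exception {
                // The "baptism" happens at execution time, inside the prevalent system.
                MyObject target = ((MyPrevalentSystem) prevalentSystem).lookup(objectId);
                target.setDescription(newDescription);
                return target;
            }
        }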

  • Programming style question on how to code functions

    - by shawnjan
    Hey all! So, I was just coding a bit today, and I realized that I don't have much consistency when it comes to a coding style for functions. One of my main concerns is whether it's proper to check that the user's input is valid OUTSIDE of the function, or to just throw the values passed by the user into the function and check whether they are valid in there. Let me sketch an example: I have a function that lists hosts based on an environment, and I want to be able to split the environment into chunks of hosts. So an example of the usage is this:

        listhosts -e testenv -s 2 1

    This will get all the hosts from "testenv", split them into two parts, and display part one. In my code, I have a function that you pass a list, and it returns a list of lists based on your parameters for splitting. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getopts processing: I check that no negatives were passed by the user, that the user didn't request to split into, say, 4 parts while asking to display part 5 (which would not be valid), and so on. tl;dr: Would you check the validity of a user's input in the flow of your MAIN, or would you do the check in the function itself, returning a valid response for valid input and NULL for invalid input? Obviously both methods work; I'm just interested to hear from experts as to which approach is better :) Thanks for any comments and suggestions you guys have!
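
    As a sketch of the validate-inside-the-function style in Python (all names invented for illustration): the function checks its own arguments and fails loudly, so every caller gets the check for free and MAIN only has to translate the exception into a usage message:

        def split_hosts(hosts, parts, index):
            """Split hosts into `parts` chunks and return chunk `index` (1-based)."""
            if parts < 1:
                raise ValueError("parts must be >= 1, got %d" % parts)
            if not 1 <= index <= parts:
                raise ValueError("index %d out of range for %d parts" % (index, parts))
            size = -(-len(hosts) // parts)  # ceiling division
            return hosts[(index - 1) * size : index * size]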

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day, all text files and images, and the app uses a database to reference these files. The app is supposed to clean these files up after a little use (perhaps after a few days), but this process may or may not be working. That is not the subject of this question. Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory, a .hidden directory which contains a zero-byte .nomedia file to prevent the MediaScanner from indexing it. Today I am seeing an error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    Regarding the SD card, I see it has plenty of storage left, but counting the files:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file to reproduce the error on a command line; I also deleted several hundred files rather than a handful. However, my questions are: are there hard limits on file size or the number of files in a directory? Am I even on the right track here? Nota bene: the SD card is as-is, i.e. I haven't formatted it, so I would guess it would be a FAT-* format. The FAT-32 format has hard limits of a 2GB file size (well above the file sizes I am dealing with) and a limit on the number of files in the root directory. I am definitely not writing files in the root directory.
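
    For what it's worth, FAT directories are capped at 65,536 32-byte directory entries (the FAT12/16 root directory is far smaller), and every long filename consumes several of those entries, so in practice a directory can refuse new files well before the nominal count. A quick command-line probe, sketched here with an arbitrary filename:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls -f | wc -l                  # raw entry count, including . and ..
        $ touch probe-$(date +%s).txt    # does creating one more file fail?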

  • Segmentation in Linux: Segmentation & Paging are redundant?

    - by claws
    Hello, I'm reading "Understanding the Linux Kernel". This is the snippet that explains how Linux uses segmentation, which I didn't understand: Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons: Memory management is simpler when all processes use the same segment register values, that is, when they share the same set of linear addresses. One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation. All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called the user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called the kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments. I'm unable to understand the 1st and last paragraphs.
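
    The first paragraph is observable directly: because Linux gives every user-mode process the same flat segments, the segment selector values are identical in every process. A quick sketch (x86 Linux, GCC inline assembly) that prints them:

        #include <stdio.h>

        int main(void) {
            unsigned short cs, ds;
            __asm__("mov %%cs, %0" : "=r"(cs));  /* user code segment selector */
            __asm__("mov %%ds, %0" : "=r"(ds));  /* user data segment selector */
            /* Prints the same values in every user-mode process on the same kernel. */
            printf("cs=0x%hx ds=0x%hx\n", cs, ds);
            return 0;
        }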

  • Tips on designing a .NET API for future use with F#

    - by Drew Noakes
    I'm in the process of designing a .NET API to allow developers to create RoboCup agents for the 3D simulated soccer league. I'm pretty happy with how the API works from C# code; however, I would like to use this project to improve my F# skills (which are currently based on reading rather than practice). So I would like to ask what kinds of things I should consider when designing an API that is to be consumed by both C# and F# code. Some points:
    - I make fairly heavy use of matrix and vector math. These are currently immutable classes/structs.
    - The API currently defines a few interfaces which the consumer implements (eg: IAgent), using instances of their implementations (eg: MyAgent) to construct other API classes (eg: new Client(myAgent)).
    - The API fires events.
    - The API exposes a few delegate types.
    - The API includes several enums.
    I'd like to release a version of the API as soon as possible, and don't want to make major changes to it later if I realise it's too difficult to work with from F#. Any advice is appreciated.
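
    As a small taste of what the F# side will look like, interfaces such as IAgent are usually implemented with object expressions rather than dedicated classes; a sketch, where the Step member and decideAction helper are invented since the real IAgent signature isn't shown:

        // Assumes something like: type IAgent = abstract Step : GameState -> Action
        let myAgent =
            { new IAgent with
                member this.Step(state) = decideAction state }

        // Immutable vector/matrix types and interface-taking constructors
        // both translate naturally to F#.
        let client = Client(myAgent)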

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management, and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:
    'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?'
    'What if one of our systems changes and the connector breaks?'
    'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?'
    With one developer behind these little projects, the answers are invariably:
    'Nobody, but...'
    'It will break, just like any other application would...'
    'I, uh...'
    How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of spending that 6 months consulting about what tender process we might use?

  • Prototypal inheritance should save memory, right?

    - by Techpriester
    Hi folks, I've been wondering: using prototypes in JavaScript should be more memory-efficient than attaching every member of an object directly to it, for the following reasons: the prototype is just one single object, and the instances hold only references to their prototype. Versus: every instance holds a copy of all the members and methods that are defined by the constructor. I started a little experiment with this:

        var TestObjectFat = function() {
            this.number = 42;
            this.text = randomString(1000);
        }

        var TestObjectThin = function() {
            this.number = 42;
        }
        TestObjectThin.prototype.text = randomString(1000);

    randomString(x) just produces a, well, random string of length x. I then instantiated the objects in large quantities like this:

        var arr = new Array();
        for (var i = 0; i < 1000; i++) {
            arr.push(new TestObjectFat()); // or new TestObjectThin()
        }

    ...and checked the memory usage of the browser process (Google Chrome). I know that's not very exact... However, in both cases the memory usage went up significantly as expected (about 30MB for TestObjectFat), but the prototype variant used not much less memory (about 26MB for TestObjectThin). I also checked: the TestObjectThin instances contain the same string in their "text" property, so they really are using the property of the prototype. Now, I'm not so sure what to think about this. Prototyping doesn't seem to be the big memory saver at all. I know that prototyping is a great idea for many other reasons, but I'm specifically concerned with memory usage here. Any explanations why the prototype variant uses almost the same amount of memory? Am I missing something?
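
    For anyone reproducing the experiment, here is a minimal randomString that fits the description above (the exact alphabet is an arbitrary choice):

        function randomString(len) {
            var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
            var s = '';
            for (var i = 0; i < len; i++) {
                s += chars.charAt(Math.floor(Math.random() * chars.length));
            }
            return s;
        }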

  • Cocoa Services Programming - How to identify whether the selected item is a folder or file with NSFilenamesPboardType

    - by rockybalboa
    Hi, I have some queries regarding Services menu validation. I would like to enable different services provided by my app based on whether a file or a folder is selected in the Finder. I have set NSFilenamesPboardType as the send type for the services. I have gone through the -(id)validRequestorForSendType:(NSString *)sendType returnType:(NSString *)returnType method, but my issue is that the validation there seems to be done based on the send type and return type. In my case, the selected file and folder pasteboard type is the same, and I cannot determine whether the selected item in the Finder is a file or a folder during the validation process (this is before the actual service gets invoked, i.e. when the services menu is being shown to the user). So my question is: is there any way I can get some info about the selected item in the Finder and validate the different service menus offered by my application based on that info, rather than the basic validation of the send and return types? I am not able to find any way to do so, but the "Folder Actions" service in Snow Leopard gets enabled only for folders, so it can be done. I did a /System/Library/CoreServices/pbs -dump_pboard and it is using an NSFilenamesPboardType also, yet manages to activate only for folders. Thanks in advance for any help.
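
    Not an answer to the menu-validation timing itself, but once the service is actually invoked the distinction is easy to make; a sketch of telling folders from files in the service handler, using standard Foundation calls:

        // pboard is the NSPasteboard handed to the service method.
        NSArray *paths = [pboard propertyListForType:NSFilenamesPboardType];
        NSFileManager *fm = [NSFileManager defaultManager];
        for (NSString *path in paths) {
            BOOL isDir = NO;
            if ([fm fileExistsAtPath:path isDirectory:&isDir] && isDir) {
                // the selected item is a folder
            }
        }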

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server, which proceeds to apply the modifications. However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or the DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out; it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the deserialization constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transfer Object-only route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile skipping WCF entirely and implementing my own server that uses Sockets. So my questions are: Has anyone faced a similar problem before, and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it, and sends only the bits that have changed (thus reducing concurrency-related risks)? If sending the object graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?
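
    One detail worth knowing when weighing DCS: DataContractSerializer instantiates types without calling any constructor, so parameterless constructors are not actually required (that restriction belongs to XmlSerializer), and it can preserve shared references within a graph if the types opt in. A minimal sketch, with DnsSettings invented for illustration:

        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]  // keep shared nodes shared across the wire
        public class ServerConfiguration
        {
            [DataMember] private DnsSettings dns;  // fields can stay private

            public ServerConfiguration(DnsSettings dns) { this.dns = dns; }

            public DnsSettings Dns { get { return dns; } }  // immutable to callers
        }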

  • MS Access 2003 - Failure to create MDE file: error VBA is corrupt?

    - by Justin
    OK, so this is a brand new snag I have run into. I am trying to launch a new MDE from my source MDB file, and it is locking up Access. In my MDB I first compact and repair, and then select "create a new MDE" (just as I have done many times before). It looks like it starts the process, but it never gets to the point where it compacts at the end, and Access stops responding. After I force-close the app, I look in the folder where I am trying to create the MDE and see there is a new db1 file there. If I try to open that, it gives me an error that says file not found, and then it says the Visual Basic for Applications project is corrupt. The thing is, I just made a very simple adjustment to the code since last launching an MDE, and I have double- and triple-checked it... it's not that, because it's just a simple "open this form and close this one" addition. I did, however, have my source MDB file on a disc that I copied to my laptop, and then tried to re-link the tables to the network drive (they had been linked to other tables on my local drive so that I could develop offline)?? PLEASE HELP!!!
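
    A step often suggested for corrupt-VBA symptoms like this, offered as a sketch rather than a guaranteed fix: Access's undocumented /decompile switch discards the compiled VBA state so it can be rebuilt on the next compile. Adjust both paths for your machine, and work on a copy of the MDB:

        "C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" /decompile "C:\path\to\source.mdb"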

  • how to make an import library

    - by user295030
    A requirement was sent to me, below: "API should be in the form of a static library. Company xxx will link the library into a third-party application to prevent any possible exposure of the code (DLL)." Could they mean an import library? An import library is a library that automates the process of loading and using a dynamic library. On Windows, this is typically done via a small static library (.lib) of the same name as the dynamic library (.dll). The static library is linked into the program at compile time, and then the functionality of the dynamic library can effectively be used as if it were a static library. This might be what they are alluding to... I am not sure how to make this in VS2008. Additional facts: I have a static lib that I use in my current application. Now I have to convert my app that uses that static lib into an import lib, so that they can use a third-party program to access the APIs they provided me, which in turn will use the static lib I am using. I hope I am explaining this clearly; I am just not sure how to go about it in VS2008, and I am looking for specific steps. I already have the coding done; I just need to convert it into the form they are asking for and provide the API they want. Other than that, I need to create a test program which will act as that third-party program, so I can make sure my import library works.
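
    For the import-library reading of the requirement: in VS2008 an import .lib is generated automatically when you build a DLL project that exports at least one symbol, so the usual recipe is the export-macro header pattern; a sketch (the MYLIB names are placeholders):

        /* mylib.h -- shared between the DLL build and its consumers.
           Define MYLIB_EXPORTS in the DLL project's preprocessor settings. */
        #ifdef MYLIB_EXPORTS
        #define MYLIB_API __declspec(dllexport)
        #else
        #define MYLIB_API __declspec(dllimport)
        #endif

        MYLIB_API int my_function(int arg);

    Building the DLL project then produces both mylib.dll and the matching import library mylib.lib to hand to the third party.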

  • Global hotkey capture in VB.net

    - by ggonsalv
    I want my app, which is minimized, to capture data selected in another app's window when a hotkey is pressed. My app definitely doesn't have focus. Additionally, when the hotkey is pressed I want to present a fading popup (Outlook style), so my app never gets focus. At a minimum I want to capture the window name, process ID, and the selected data. The app which has focus is not my application. I know one option is to sniff the clipboard, but are there any other solutions? This is to audit the rate of data entry into another system over which I have no control: a mainframe emulation client program (Attachmate). The plan is: complete data entry in application X; select a certain section of the screen in app X which is proof of data entry (a transaction ID); press the magic hotkey, which then 'sends' the selection to my app. From System.Environment or System.Threading I can find the Windows logon; similarly, I can also capture the time. All the data will be logged to SQL. Once complete, show an Outlook-style popup saying the data entry has been logged. Any thoughts?
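
    For the system-wide hotkey half, here is a minimal sketch using the Win32 RegisterHotKey API, which fires even when another application has focus. Capturing the other app's selected text is a separate problem (clipboard sniffing or the emulator's own API); the hotkey id, modifier, and key below are arbitrary choices:

        Imports System.Runtime.InteropServices
        Imports System.Windows.Forms

        Public Class HotKeyForm
            Inherits Form

            ' P/Invoke declarations for the hotkey API in user32.dll.
            <DllImport("user32.dll")> _
            Private Shared Function RegisterHotKey(ByVal hWnd As IntPtr, ByVal id As Integer, ByVal fsModifiers As UInteger, ByVal vk As UInteger) As Boolean
            End Function

            <DllImport("user32.dll")> _
            Private Shared Function UnregisterHotKey(ByVal hWnd As IntPtr, ByVal id As Integer) As Boolean
            End Function

            Private Const WM_HOTKEY As Integer = &H312
            Private Const MOD_CONTROL As UInteger = 2
            Private Const VK_F9 As UInteger = &H78

            Private Sub HotKeyForm_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
                RegisterHotKey(Me.Handle, 1, MOD_CONTROL, VK_F9) ' Ctrl+F9, id 1
            End Sub

            Protected Overrides Sub WndProc(ByRef m As Message)
                If m.Msg = WM_HOTKEY Then
                    ' Hotkey pressed while another app has focus: capture here.
                End If
                MyBase.WndProc(m)
            End Sub
        End Class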

  • C#: Forcing a clean run in a long-running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL DB table. Once it has done its bit, it starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column and then one of the other columns, and running each value in the column through a validation pipeline where the results are stored in another database. My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect, I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out. Unfortunately, that isn't happening, for some reason.
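
    If a forced collection really is the last resort, note that a lone GC.Collect() only gets objects with finalizers as far as the finalization queue; the usual idiom, sketched below, blocks until they are actually reclaimed (though making sure readers, commands, and connections sit in using blocks typically matters more):

        // Full, blocking collection between column passes.
        GC.Collect();                     // first pass: queue finalizable objects
        GC.WaitForPendingFinalizers();    // let their finalizers run
        GC.Collect();                     // second pass: reclaim them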

  • Poor performance using RMI-proxies with Swing components

    - by Patrick
    I'm having huge performance issues when I add RMI proxy references to a Java Swing JList component. I'm retrieving a list of user Profiles with RMI from a server. The retrieval itself takes just a second or so, which is acceptable under the circumstances. However, when I try to add these proxies to a JList, with the help of a custom ListModel and a CellRenderer, it takes between 30 and 60 seconds to add about 180 objects. Since it is a list of users' names, it's preferable to present them alphabetically. The biggest performance hit is when I sort the elements as they get added to the ListModel. Since the list will always be sorted, I opted to use the built-in Collections.binarySearch() to find the correct position for each element as it is added, and the comparator uses two methods that are defined by the Profile interface, namely getFirstName() and getLastName(). Is there any way to speed this process up, or am I simply implementing it the wrong way? Or is this a "feature" of RMI? I'd really love to be able to cache some of the data of the remote objects locally, to minimize the remote method calls.
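
    Each comparator call in that binary search is a network round trip, and the sort makes O(n log n) of them, with the CellRenderer possibly adding more on every repaint. A sketch of the local-cache idea from the last sentence (NameRow is an invented holder; Profile is assumed to be the remote interface): copy each name across the wire exactly once, sort locally, then fill the model:

        import java.rmi.RemoteException;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        // One immutable snapshot per remote profile: two remote calls, once.
        class NameRow {
            final String first, last;
            final Profile remote;
            NameRow(Profile p) throws RemoteException {
                this.first = p.getFirstName();
                this.last = p.getLastName();
                this.remote = p;
            }
        }

        // Inside the loading code:
        List<NameRow> rows = new ArrayList<NameRow>();
        for (Profile p : profiles) {
            rows.add(new NameRow(p));
        }
        Collections.sort(rows, new Comparator<NameRow>() {
            public int compare(NameRow a, NameRow b) {
                int c = a.last.compareToIgnoreCase(b.last);
                return c != 0 ? c : a.first.compareToIgnoreCase(b.first);
            }
        });
        // Populate the ListModel from rows; the renderer reads NameRow
        // fields instead of calling back through the RMI proxy.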

  • ADO.NET Multiple simultaneous reads from an open database.

    - by Deverill
    Answer not needed - my logic was wrong and this question is invalid. Charles helped me see where I went off the tracks. Thanks. I have a utility that moves data from one source to another. In the process of writing the record, I check to see if it exists and do an update/insert as necessary. The difficulty I have is that as I'm writing the main record info, there is a second table for "custom data" that I have to check for existence and update/insert as well. Example: I may be loading a pencil sharpener that may or may not exist. While I'm writing it into the destination, it has characteristics such as style, color, etc., and each of them may or may not exist. As written, I seem to need two DataReaders open: one for the sharpener, and one to check for and update color. I am new to ADO.NET, but not to programming, and it's more complicated than I've described, but for sanity's sake I didn't put in all the details. My question is: what am I missing? You can't have two readers open at the same time on a connection, yet I can't close the first if I'm stepping through all the products. It seems inefficient to have two connections, readers, etc. for this. Is there a feature of ADO.NET databases that I'm missing? How would you do it? Thanks!
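
    For readers landing here with the same underlying question: on SQL Server 2005 and later, ADO.NET can keep several readers open on one connection via Multiple Active Result Sets (MARS); a sketch of opting in through the connection string (server and database names are placeholders):

        using System.Data.SqlClient;

        var cs = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;" +
                 "MultipleActiveResultSets=True";
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            // Two SqlDataReaders may now be open on conn at the same time.
        }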

  • image appears larger than bounding rectangle?

    - by Kildareflare
    Hello. I'm implementing a custom print preview control. One of the objects it needs to display is an image which can be several pages high and wide. I've successfully divided the image into pages and displayed them in the correct order. Each "page" is drawn within the margin bounds (that is, each page is sized to be the same as the MarginBounds). The problem I have is that the image the page represents exceeds the bottom margin. However, if I draw a rectangle with the same dimensions as the image, at the same position as the image, the rectangle matches the margin bounds. Thus, when drawn to the page (in print preview and on the printed page), the image is larger than a rectangle drawn from the image's dimensions. I.e. the image is drawn at the same size as e.MarginBounds and at e.MarginBounds.Location, yet exceeds the bottom margin. How is this? I've checked resolutions at each step of the process and they are all 96 DPI.

        //...
        private List<Image> m_printImages = new List<Image>();
        //...
        private void ImagePrintDocument_PrintPage(object sender, PrintPageEventArgs e)
        {
            PrepareImageForPrinting(e);
            e.Graphics.DrawImage(m_printImages[m_currentPage],
                new Point(e.MarginBounds.X, e.MarginBounds.Y));
            e.Graphics.DrawRectangle(new Pen(Color.Blue, 1.0F),
                new Rectangle(
                    new Point(e.MarginBounds.X, e.MarginBounds.Y),
                    new Size(m_printImages[m_currentPage].Width,
                             m_printImages[m_currentPage].Height)));
            // rest of handler
        }

    PS: Not able to show an image, as free image hosting is blocked here.
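
    A likely explanation, offered as a sketch: the DrawImage(Image, Point) overload scales the image by its own resolution into the Graphics page unit (hundredths of an inch when printing), while DrawRectangle treats the pixel numbers as page units directly, so the two can disagree even when everything reports 96 DPI. Passing an explicit destination rectangle forces the image to the intended size:

        // Draw into an explicit destination rectangle so GDI+ does not
        // rescale by the image's own resolution.
        e.Graphics.DrawImage(m_printImages[m_currentPage],
            new Rectangle(
                new Point(e.MarginBounds.X, e.MarginBounds.Y),
                new Size(m_printImages[m_currentPage].Width,
                         m_printImages[m_currentPage].Height)));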

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me, from my experimenting with Haskell, Erlang, and Scheme, that functional programming languages are a fantastic way to answer scientific questions: for example, taking a small set of data and performing some extensive analysis on it to return a significant answer. They're great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time, it seems that by their very nature they are more suited to finding analytical solutions than to actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical' - such as "accept a request, find and process the requested data, and return it formatted as needed" - seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-Polish-notation version of Python. So I am just curious whether I have missed something in these languages, or am just way off the mark in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks, or of practical tasks that are best performed by functional languages?

  • Is there a way to programmatically check dependencies of an EXE?

    - by Mason Wheeler
    I've got a certain project that I build and distribute to users. I have two build configurations, Debug and Release. Debug, obviously, is for my use in debugging, but there's an additional wrinkle: the Debug configuration uses a special debugging memory manager, with a dependency on an external DLL. There have been a few times when I've accidentally built and distributed an installer package with the Debug configuration, and it has then failed to run once installed, because the users don't have the special DLL. I'd like to keep that from happening in the future. I know I can get the dependencies of a program by running Dependency Walker, but I'm looking for a way to do it programmatically. Specifically, I have a way to run scripts while creating the installer, and I want something I can put in the installer script to check the program, see if it has a dependency on this DLL, and if so cause the installer-creation process to fail with an error. I know how to create a simple CLI program that would take two filenames as parameters, run a DependsOn function, and create output based on the result, but I don't know what to put in the DependsOn function. Does anyone know how I'd go about writing it?
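
    If the Visual Studio tools happen to be available on the build machine, the check can even stay out of code entirely: dumpbin lists a PE file's imports, and a one-line batch step can fail the build on a match (debugmm.dll stands in for the real DLL name):

        rem Fail the installer build if the EXE imports the debug-only DLL.
        dumpbin /dependents MyProgram.exe | findstr /i "debugmm.dll" && exit /b 1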

  • PHP sessions in database only writing part of the information to the table...

    - by Ronedog
    I'm having difficulty figuring out what's going on here; hoping someone can help me out. I have been using PHP and MySQL, storing my session information in the database. The app is only running on localhost (Vista). In the php.ini file I commented out the "session.save_handler = files" line and am using a PHP class to handle the session writes/reads, etc. My login process is this: submit login credentials via login.php; login.php calls loginprocess.php; loginprocess.php verifies the user and, if valid, starts a new session, adds data to the session vars, and then redirects to index.php. Here's the problem: loginprocess.php sets a bunch of session vars, like $_SESSION['account_id'] = $account_id; etc., but when I go to index.php and do a var_dump($_SESSION), it just says "array() empty". However, if I do a var_dump($_SESSION) in loginprocess.php, just before the redirection line header("Location: ../index.php");, then it shows all the data in the session variable. If I look in the database where the session information is stored, there is data in the session_id field, created_ts field, and expires field, but the session_data field has nothing in it, and in the past this is the field where all my session data was stored. How can I var_dump the session in loginprocess.php while the data doesn't exist in the DB table? Is it using some kind of caching? I cleared my cookies, etc., but no change. Why is the session_id being written to the table, but the actual session data not? Any ideas are appreciated. Thanks.
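
    One classic cause worth ruling out, sketched below: with a custom save handler, the actual write happens at script shutdown, after the redirect header has been sent, and in some setups the handler's resources are torn down before the write runs. Calling session_write_close() before redirecting forces the write immediately:

        <?php
        // End of loginprocess.php (sketch).
        $_SESSION['account_id'] = $account_id;
        session_write_close();              // run the custom write handler now
        header("Location: ../index.php");
        exit;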

  • CSS Background image in Redmine template arbitrarily not loading

    - by Pekka
    I'm in the process of building a template for Redmine (a project management system based on Ruby on Rails). Ruby is running on a virtual server from a Bitnami.org installation package; the OS is Windows. The template essentially consists of a styles.css file. In that file, I have the following rule:

        #header {
            padding: 0px;
            padding-top: 48px;
            background-color: #62DFFF;
            background-image: url(../images/bkg.jpg);
            background-position: center bottom;
            background-repeat: repeat-x;
            height: 150px;
        }

    It's a header element with a background image. The problem: this background image arbitrarily appears and disappears when reloading. Say you reload ten times in twenty seconds; the image will appear in two instances and be missing in the other 18. I would have put this down to server problems, but the weird thing is that when it's missing, the request for the image doesn't appear in Firebug's Net tab at all. Even if it were cached, the request should be there. Raw screenshots of the identical page on two reloads: I am 100% sure the CSS file does not change in between. I have examined both instances with Firebug and the CSS is identical. It happens in both Firefox and Chrome, so it must be something basic I'm overlooking. What could be causing a browser not to load a resource at all? I have zero idea about Ruby or Rails - getting Redmine running and customized is all I have ever had to do with this platform - so I don't really know where to look. Apache's, Mongrel's, and Redmine's error logs look fine, though.

  • Nginx, Apache, MySQL, Memcached on a server with 4G RAM. How do I optimize so I don't run out of memory?

    - by TomSawyer
    I have a dedicated server with Nginx proxying for Apache, plus Memcached and MySQL, and 4G of RAM. Lately the number of visitors to my site hasn't increased, but the server always gets overloaded at certain times of day (9AM - 3PM). The RAM in use climbs second by second until it is full; at that moment the server gets overloaded, and I have to kill all the Apache and MySQL services and restart them to get free memory. Then it fills up again - a terrible circle. Here is my RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf config; can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

  • Javascript: How to test JSONP on localhost

    - by hqt
    Here is the way I test JSONP: I run XAMPP, and in the htdocs folder I create a javascript folder. I create a json1.json file containing some data to process. After that, I run this HTML file locally, and it calls a function on the "other machine" (in this case localhost). Here is my code:

        <head>
          <script>
          function updateSales(sales) {
              alert('test jsonp');
              // real code here
          }
          </script>
        </head>
        <body>
          <h1>JSONP Example</h1>
          <script src="http://localhost:85/javascript/json1.json?callback=updateSales"></script>
        </body>

    But when I run it, nothing happens. However, if I change to another real JSONP source on the internet, it works. That means changing the line:

        <script src="http://localhost:85/javascript/json1.json?callback=updateSales"></script>

    to:

        <script src="http://gumball.wickedlysmart.com/?callback=updateSales"></script>

    works smoothly. So, I don't know how to test JSONP using localhost. Please help me. Thanks :)
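
    The likely catch: a static .json file ignores the ?callback= parameter, so the browser receives bare JSON and never calls updateSales; a real JSONP endpoint wraps its payload in the named function. A sketch of a tiny wrapper to put next to the data file (json1.php is an invented name; the script src would point at it instead):

        <?php
        // json1.php - serve json1.json as JSONP.
        header('Content-Type: application/javascript');
        $data = file_get_contents('json1.json');
        // Whitelist the callback name to avoid script injection.
        $callback = preg_replace('/[^A-Za-z0-9_]/', '', $_GET['callback']);
        echo "$callback($data);";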

  • Scrum - Responding to traditional RFPs

    - by Todd Charron
    Hi all, I've seen many articles about how to put together agile RFPs and negotiate agile contracts, but what about when you're responding to a more traditional RFP? Any advice on how to meet the requirements of the RFP while still presenting an agile approach? A lot of these traditional RFPs request specific technical implementations, timelines, and costs, while also requesting exact details about milestones and how the technical solutions will be implemented. While I'm sure in traditional waterfall it's normal to pretend that these things are facts, it seems wrong for an agile organization to commit to something like this just to get through the initial screening process. What methods have you used to respond to more traditional RFPs? Here's a sample one grabbed from Google: http://www.investtoronto.ca/documents/rfp-web-development.pdf - particularly, "3. A detailed work plan outlining how they expect to achieve the four deliverables within the timeframe outlined. Plan for additional phases of development." and "8. The detailed cost structure, including per diem rates for team members, allocation of hours between team members, expenses and other out of pocket disbursements, and a total upset price."

  • How many custom tabs can I add to a Facebook page?

    - by Maxi Ferreira
    I'm building a web application to create custom tabs and add them to users' Facebook fan pages. I know how the process of "installing" FB apps into FB pages works, so that they show up as page tabs, but the problem is that the client wants to allow the user to create unlimited page tabs for a single FB page. So I basically have two questions. 1 - Can I reuse a single FB app so it is included on the same page several times? If so, is there a way to know the "id" of each page tab? If I have my FB app look for the tab content at http://www.mywebapp.com/tab/, I know I get a signed_request with the app ID and the page ID, but if that same app is installed several times on the same page, I don't know which tab the user has clicked on. I know it's a little bit messy, and I don't think there's a way to do this, so my next question is probably more pertinent. 2 - Is there a limit on how many tabs I can add to a single Facebook page? That way, if there's a limit of, say, 12 tabs, I can create 12 FB tab apps, store the IDs, and then know which tab of which page the user is currently viewing. Thanks in advance! Maxi
