Search Results

Search found 3077 results on 124 pages for 'boost serialization'.

  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I decided to ask a follow-up to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I do believe there is room for one less conditional in here (the down boolean). The fastest answer wins! Constraints:

    - The cnt statement can/will be multiple statements, so let's make sure it appears only once.
    - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
    - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
    - This is C++ code.
    - The visit order of the nodes must stay as it is in the example below (hit parents first, then the children, then the 'next' nodes).
    - BaseNodePtr is a boost::shared_ptr, and thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897ms to visit 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree (BaseNodePtr current, unsigned int & cnt)
        {
            bool down = true;
            while (true)
            {
                if (down)
                {
                    while (true)
                    {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild()) break;
                        current = current->child();
                    }
                }
                if (current->hasNext())
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current) return; // done.
                }
            }
        }
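
    For reference, a minimal sketch of the recursive formulation the asker mentions as faster, assuming the BaseNodePtr interface described above (passing by const reference to avoid shared_ptr copies; untested against the asker's tree):

        void processTreeRec (const BaseNodePtr & node, unsigned int & cnt)
        {
            cnt++; // visit the parent first
            if (node->hasChild())
                processTreeRec(node->child(), cnt); // then its children
            if (node->hasNext())
                processTreeRec(node->next(), cnt);  // then the 'next' nodes
        }

    Note that the recursion on next() is not bounded by tree depth, so very long sibling chains could grow the call stack.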

  • Fastest reliable way for Clojure (Java) and Ruby apps to communicate

    - by jkndrkn
    Hi there, we have cloud-hosted (RackSpace cloud) Ruby and Java apps that will interact as follows: the Ruby app sends a request to the Java app. The request consists of a map structure containing strings, integers, other maps, and lists (analogous to JSON). The Java app analyzes the data and sends a reply to the Ruby app. We are interested in evaluating both messaging formats (JSON, Protocol Buffers, Thrift, etc.) as well as message transmission channels/techniques (sockets, message queues, RPC, REST, SOAP, etc.). Our criteria:

    - Short round-trip time.
    - Low round-trip-time standard deviation. (We understand that garbage collection pauses and network usage spikes can affect this value.)
    - High availability.
    - Scalability (we may want to have multiple instances of the Ruby and Java apps exchanging point-to-point messages in the future).
    - Ease of debugging and profiling.
    - Good documentation and community support.
    - Bonus points for Clojure support.

    What combination of message format and transmission method would you recommend? Why? I've gathered here some materials we have already collected for review:

    - Comparison of various Java serialization options
    - Comparison of Thrift and Protocol Buffers (old)
    - Comparison of various data interchange formats
    - Comparison of Thrift and Protocol Buffers
    - Fallacies of Protocol Buffers RPC features
    - Discussion of RPC in the context of AMQP (message queueing)
    - Comparison of RPC and message-passing in distributed systems (PDF)
    - Criticism of RPC from the perspective of a message-passing fan
    - Overview of Avro from a Ruby programmer's perspective

  • How can I write a function template for all types with a particular type trait?

    - by TC
    Consider the following example:

        struct Scanner
        {
            template <typename T> T get();
        };

        template <> string Scanner::get() { return string("string"); }
        template <> int Scanner::get() { return 10; }

        int main()
        {
            Scanner scanner;
            string s = scanner.get<string>();
            int i = scanner.get<int>();
        }

    The Scanner class is used to extract tokens from some source. The above code works fine, but fails when I try to get other integral types like a char or an unsigned int. The code to read these types is exactly the same as the code to read an int. I could just duplicate the code for all the other integral types I'd like to read, but I'd rather define one function template for all integral types. I've tried the following:

        struct Scanner
        {
            template <typename T>
            typename enable_if<boost::is_integral<T>, T>::type get();
        };

    which works like a charm, but I am unsure how to get Scanner::get<string>() to function again. So, how can I write code so that I can do scanner.get<string>() and scanner.get<any integral type>() and have a single definition to read all integral types?

    Update: bonus question: what if I want to accept more than one range of types based on some traits? For example, how should I approach this problem if I want to have three get functions that accept (i) integral types, (ii) floating-point types, and (iii) strings, respectively?
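
    A sketch of the usual Boost answer (untested here): give get() two function-template overloads whose enable_if conditions are mutually exclusive, so exactly one participates in overload resolution for any T:

        #include <string>
        #include <boost/utility/enable_if.hpp>
        #include <boost/type_traits/is_integral.hpp>
        #include <boost/type_traits/is_same.hpp>

        struct Scanner
        {
            // One definition covering int, char, unsigned int, ...
            template <typename T>
            typename boost::enable_if<boost::is_integral<T>, T>::type get()
            {
                return 10; // placeholder: parse an integral token here
            }

            // Enabled only for std::string.
            template <typename T>
            typename boost::enable_if<boost::is_same<T, std::string>, T>::type get()
            {
                return std::string("string");
            }
        };

    For the bonus question, the same pattern extends to a third overload guarded by boost::is_floating_point, as long as the conditions stay mutually exclusive.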

  • Portable way of counting milliseconds in C++?

    - by ereOn
    Hi, is there any portable (Windows & Linux) way of counting how many milliseconds elapsed between two calls? Basically, I want to achieve the same functionality as the StopWatch class of .NET (for those who have already used it). In a perfect world, I would have used boost::date_time, but that's not an option here due to some silly rules I'm forced to respect. For those who read code better, this is what I'd like to achieve:

        Timer timer;

        timer.start();
        // Some instructions here
        timer.stop();

        // Print out the elapsed time
        std::cout << "Elapsed time: " << timer.milliseconds() << "ms" << std::endl;

    So, if there is a portable (set of) function(s) that can help me implement the Timer class, what is it? If there is no such function, which Windows & Linux APIs should I use to achieve this functionality (using #ifdef WINDOWS-like macros)? Thanks!
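
    For the record, a sketch of such a Timer in portable form. std::chrono postdates this question (it is C++11), but it is the standard answer today, and steady_clock is immune to wall-clock adjustments:

        #include <chrono>

        class Timer
        {
        public:
            void start() { begin_ = std::chrono::steady_clock::now(); }
            void stop()  { end_ = std::chrono::steady_clock::now(); }

            long long milliseconds() const
            {
                return std::chrono::duration_cast<std::chrono::milliseconds>(end_ - begin_).count();
            }

        private:
            std::chrono::steady_clock::time_point begin_, end_;
        };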

  • C++/Qt - Memory allocation question

    - by HardCoder1986
    Hello! I recently started investigating Qt for myself and have the following question: suppose I have some QTreeWidget* widget. At some moment I want to add some items to it, and this is done via the following call:

        QList<QTreeWidgetItem*> items;

        // Prepare the items
        QTreeWidgetItem* item1 = new QTreeWidgetItem(...);
        QTreeWidgetItem* item2 = new QTreeWidgetItem(...);
        items.append(item1);
        items.append(item2);

        widget->addTopLevelItems(items);

    So far it looks OK, but I don't actually understand who should control the objects' lifetime. I should explain this with an example: let's say another function calls widget->clear(). I don't know what happens beneath this call, but I do think that the memory allocated for item1 and item2 doesn't get disposed of here, because their ownership wasn't actually transferred. And, bang, we have a memory leak. The question is the following: does Qt have something to offer for this kind of situation? I could use boost::shared_ptr or any other smart pointer and write something like

        shared_ptr<QTreeWidgetItem> ptr(new QTreeWidgetItem(...));
        items.append(ptr.get());

    but I don't know if Qt itself would try to make explicit delete calls on my pointers (which would be disastrous, since I state them as shared_ptr-managed). How would you solve this problem? Maybe everything is evident and I'm missing something really simple?
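
    For what it's worth, a sketch relying on Qt's parent-child ownership model (worth verifying against the Qt documentation for your version): the QTreeWidget takes ownership of items added as top-level items, and QTreeWidget::clear() deletes them, so plain new with no smart pointer is the intended usage:

        // Sketch: the widget owns the items once they are inserted,
        // so clear() will delete them; do not delete them yourself,
        // and do not wrap them in shared_ptr.
        QTreeWidgetItem* item1 = new QTreeWidgetItem(QStringList("first"));
        QTreeWidgetItem* item2 = new QTreeWidgetItem(QStringList("second"));
        widget->addTopLevelItems(QList<QTreeWidgetItem*>() << item1 << item2);

        widget->clear(); // deletes item1 and item2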

  • Passing the Class<T> of a generic list in Java?

    - by Rob Stevenson-Leggett
    I have a method for reading JSON from a service. I'm using Gson to do my serialization and have written the following method using type parameters:

        public T getDeserializedJSON(Class<T> aClass, String url)
        {
            Reader r = getJSONDataAsReader(url);
            Gson gson = new Gson();
            return gson.fromJson(r, aClass);
        }

    I'm consuming JSON which returns just an array of a type, e.g.:

        [
            { "prop": "value" },
            { "prop": "value" }
        ]

    I have a Java class which maps to this object; let's call it MyClass. However, to use my method I need to do this:

        RestClient<ArrayList<MyClass>> restClient = new RestClient<ArrayList<MyClass>>();
        ArrayList<MyClass> results = restClient.getDeserializedJSON(ArrayList<MyClass>.class, url);

    I can't figure out the syntax to do it, though. Passing just ArrayList.class doesn't work. So is there a way I can get rid of the Class parameter, or how do I get the class of the ArrayList of MyClass?

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: my database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very, very low (it starts at 5 and drops to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinite draws per second if I turn the water off. It may be related to memory bandwidth, though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw I'm calling glVertex3f() about a million times (the max size is 450*600, with 4 vertices each), and it's done entirely sequentially because GLUT won't let me call anything in parallel. So: is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use fewer vertices")?
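
    One commonly suggested direction (a sketch under the assumption that the fixed-function pipeline from the question is kept) is to build the vertex data in a plain array, which can be filled in parallel with OpenMP, and submit it with a single vertex-array draw call instead of a million glVertex3f calls:

        #include <GL/gl.h>
        #include <vector>

        std::vector<GLfloat> verts; // x,y,z triples, 4 vertices per water quad
        // ... fill verts in parallel (e.g. #pragma omp parallel for) ...

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, &verts[0]);
        glDrawArrays(GL_QUADS, 0, GLsizei(verts.size() / 3));
        glDisableClientState(GL_VERTEX_ARRAY);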

  • How to retrieve user input data entered in a tab bar view controller when the application starts (iPhone app)

    - by TechFusion
    Hello, I have created a window-based application with a tab bar controller as the root controller; it has three tabs. One tab has labels and text field inputs (Name, Username and Password). I am looking to store these text field inputs when the user enters them, and to be able to retrieve them in the other tabs. Previously I set a key for each text field, stored values with setObject:forKey:, and was able to retrieve the text field values in the same view controller with [[NSUserDefaults standardUserDefaults] stringForKey:key]. Now I am looking to create a database which has different objects, where each object holds the data values of the different text field inputs, so that I can access them in the whole application. Something like:

        DatabaseName
        - Object1 - Name, Username & Password
        - Object2 - Name, Username & Password

    Something like a struct in normal C, so it would be easy to retrieve the data. I am looking at the NSUserDefaults class and "User Defaults Programming Topics in Cocoa" (http://developer.apple.com/iPhone/library/documentation/Cocoa/Conceptual/UserDefaults/UserDefaults.html#//apple_ref/doc/uid/10000059-BCIDJFHD), and also referring to the "Archives and Serializations Programming Guide" (http://developer.apple.com/iphone/library/documentation/Cocoa/Conceptual/Archiving/Archiving.html#//apple_ref/doc/uid/10000047i). Which method do I need to use to create this type of database? Thanks,

  • Please suggest other ways of communicating between server & client.

    - by Geo
    I'm writing a TCP chat server (the programming language does not matter). It's a school project for my nephew, so it won't be released, and all the questions I'm asking are just for my knowledge :). Some of the things it will support:

    - chatting between users (doh)
    - it will be multithreaded
    - sending each other files

    I know I could easily get away with all the stuff above if I go with serialization and just send objects from client to server and back. But if I do that, it will be limited to a specific programming language (meaning clients written in other programming languages may not be able to deserialize the objects). What would be the way to go so that clients written in other languages could be supported? One way to go, off the top of my head, would be in this direction: the server & the client communicate by sending messages & chunks (in lieu of other names). Here's what I mean by this: every time the client/server wants to send something (a text message or a file), it will first send a simple text message (newline terminated) with the number of the chunks it will send. Example:

        command 4,20,30,40,50

    where command would be something like instant-message or file, 4 would be the number of chunks to be sent, 20 would be the size in bytes of the first chunk, 30 of the 2nd, and so forth. After the message is sent, the client/server will start sending chunks (of the sizes mentioned in the sent message). What do you think about implementing the client/server communication this way? What better options are there?
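
    As an illustration, a minimal C++ sketch of that header-then-chunks framing (send_all is a hypothetical helper that loops until the whole buffer is written on the socket; error handling omitted):

        #include <cstddef>
        #include <cstdio>
        #include <string>
        #include <vector>

        void send_all(int fd, const char* data, std::size_t len); // assumed transport helper

        void send_framed(int fd, const std::string& command,
                         const std::vector<std::string>& chunks)
        {
            // Header line: "command N,size1,size2,...\n"
            char num[32];
            std::snprintf(num, sizeof(num), " %zu", chunks.size());
            std::string header = command + num;
            for (std::size_t i = 0; i < chunks.size(); ++i) {
                std::snprintf(num, sizeof(num), ",%zu", chunks[i].size());
                header += num;
            }
            header += "\n";

            send_all(fd, header.data(), header.size());
            for (std::size_t i = 0; i < chunks.size(); ++i)
                send_all(fd, chunks[i].data(), chunks[i].size());
        }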

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. In more than half the cases where I use the function, I only need to know the value to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function:

        inline int population_count64(unsigned __int64 w)
        {
            w -= (w >> 1) & 0x5555555555555555ULL;
            w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
            w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
            return int((w * 0x0101010101010101ULL) >> 56);
        }

    So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
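
    For the capped case, one possibility (a sketch, worth benchmarking against the SWAR routine above on the target CPUs) is Kernighan's clear-the-lowest-bit loop, stopped after 15 iterations; it is only a win if the inputs are usually sparse, and it saturates at 15 rather than returning the exact weight:

        static inline int population_count64_max15(unsigned long long w)
        {
            int count = 0;
            while (w != 0 && count < 15) {
                w &= w - 1;   /* clears the lowest set bit */
                ++count;
            }
            return count;     /* exact when the weight is <= 15; 15 otherwise */
        }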

  • Two references to the same domain/entity model

    - by Sbossb
    Problem: I want to save the attributes of a model that have changed when a user edits them. Here's what I want to do:

    1. Retrieve the edited view model
    2. Get the domain model and map back the updated values
    3. Call the update method on the repository
    4. Get the "old" domain model and compare the values of the fields
    5. Store the changed values (in JSON) into a table

    However, I am having trouble with step 4. It seems that the Entity Framework doesn't want to hit the database again to get the model with the old values; it just returns the same entity I have.

    Attempted solutions: I have tried using the Find() and the SingleOrDefault() methods, but they just return the model I currently have.

    Example code:

        private string ArchiveChanges(T updatedEntity)
        {
            // Here is the problem!
            // oldEntity is the same as updatedEntity
            T oldEntity = DbSet.SingleOrDefault(x => x.ID == updatedEntity.ID);

            Dictionary<string, object> changed = new Dictionary<string, object>();

            foreach (var propertyInfo in typeof(T).GetProperties())
            {
                var property = typeof(T).GetProperty(propertyInfo.Name);

                // Get the old value and the new value from the models
                var newValue = property.GetValue(updatedEntity, null);
                var oldValue = property.GetValue(oldEntity, null);

                // Check to see if the values are equal
                if (!object.Equals(newValue, oldValue))
                {
                    // Values have changed ... log it
                    changed.Add(propertyInfo.Name, newValue);
                }
            }

            var ser = new System.Web.Script.Serialization.JavaScriptSerializer();
            return ser.Serialize(changed);
        }

        public override void Update(T entityToUpdate)
        {
            // Do something with this
            string json = ArchiveChanges(entityToUpdate);

            entityToUpdate.AuditInfo.Updated = DateTime.Now;
            entityToUpdate.AuditInfo.UpdatedBy = Thread.CurrentPrincipal.Identity.Name;

            base.Update(entityToUpdate);
        }

  • What to throw in a C++ class wrapping a C library ?

    - by ereOn
    I have to create a set of wrapping C++ classes around an existing C library. For many objects of the C library, construction is done by calling something like britney_spears* create_britney_spears(), and the opposite function, void free_britney_spears(britney_spears* brit). If the allocation of a britney_spears fails, create_britney_spears() returns NULL. This is, as far as I know, a very common pattern. Now I want to wrap this inside a C++ class:

        // britney_spears.hpp
        class BritneySpears
        {
        public:
            BritneySpears();

        private:
            boost::shared_ptr<britney_spears> m_britney_spears;
        };

    And here is the implementation:

        // britney_spears.cpp
        BritneySpears::BritneySpears() :
            m_britney_spears(create_britney_spears(), free_britney_spears)
        {
            if (!m_britney_spears)
            {
                // Here I should throw something to abort the construction, but what??
            }
        }

    So the question is in the code sample: what should I throw to abort the constructor? I know I can throw almost anything, but I want to know what is usually done. I have no other information about why the allocation failed. Should I create my own exception class? Is there a std exception for such cases? Many thanks.
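
    As one common idiom (a sketch, not the only reasonable answer): since the C API reports nothing beyond the NULL, a small exception type derived from std::runtime_error carries a descriptive message and still lets callers catch std::exception:

        #include <stdexcept>

        class britney_spears_error : public std::runtime_error
        {
        public:
            britney_spears_error()
                : std::runtime_error("create_britney_spears() returned NULL") {}
        };

        // In the constructor:
        //     if (!m_britney_spears)
        //         throw britney_spears_error();

    If the failure is known to be an allocation failure, throwing std::bad_alloc would be the other conventional choice.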

  • how to pass vector of string to foo(char const *const *const)?

    - by user347208
    Hi, this is my first post, so please be nice. I searched this forum and googled, but I still cannot find the answer. This problem has bothered me for more than a day, so please give me some help. Thank you.

    I need to pass a vector of strings to a library function foo(char const *const *const). I cannot pass &Vec[0], since it's a pointer to a string. Therefore, I have an array and copy the c_str() pointers into that array. The following is my code (aNames is the vector of strings):

        const char* aR[aNames.size()];
        std::transform(aNames.begin(), aNames.end(), aR,
                       boost::bind(&std::string::c_str, _1));
        foo(aR);

    However, it seems to cause some undefined behavior: if I run the above code, the function foo throws some warnings about illegal characters ('èI' blablabla) in aR. If I print aR before the call to foo, like this:

        std::copy(aR, aR + rowNames.size(),
                  std::ostream_iterator<const char*>(std::cout, "\n"));
        foo(aR);

    then everything is fine. My questions are: does the conversion cause undefined behavior? If so, why? What is the correct way to pass a vector of strings to foo(char const *const *const)? Thank you very much for your help!
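
    For comparison, a sketch of the plain-loop version, using std::vector instead of the variable-length array above (VLAs are a compiler extension, not standard C++); the pointers remain valid only while aNames is alive and unmodified:

        #include <string>
        #include <vector>

        void foo(char const* const* const); // the library function from the question

        void call_foo(const std::vector<std::string>& aNames)
        {
            std::vector<const char*> ptrs;
            ptrs.reserve(aNames.size());
            for (std::vector<std::string>::size_type i = 0; i < aNames.size(); ++i)
                ptrs.push_back(aNames[i].c_str());

            if (!ptrs.empty())
                foo(&ptrs[0]);
        }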

  • php / phpDoc - @return instance of $this class ?

    - by searbe
    How do I mark a method as "returns an instance of the current class" in my phpDoc? In the following example, my IDE (NetBeans) will see that setSomething always returns a foo object. But that's not true if I extend the class - it'll return $this, which in the second example is a bar object, not a foo object.

        class foo
        {
            protected $_value = null;

            /**
             * Set something
             *
             * @param string $value the value
             * @return foo
             */
            public function setSomething($value)
            {
                $this->_value = $value;
                return $this;
            }
        }

        $foo = new foo();
        $out = $foo->setSomething();

    So fine - setSomething returns a foo - but in the following example, it returns a bar:

        class bar extends foo
        {
            public function someOtherMethod() {}
        }

        $bar = new bar();
        $out = $bar->setSomething();
        $out->someOtherMethod(); // <-- Here, NetBeans will think $out
                                 // is a foo, so doesn't see this other
                                 // method in $out's code-completion

    It'd be great to solve this, as for me code completion is a massive speed boost. Anyone got a clever trick, or even better, a proper way to document this with phpDoc?

  • How to create a class by reading from another class (.cs) file with Powershell?

    - by Razcer
    I have this POGO (simple putter/getter) class that I am trying to read in PowerShell:

        using System;
        using System.Runtime.Serialization;

        namespace MyApp.VM
        {
            [Serializable]
            public class MyClassVM
            {
                public Int64 CtrId { get; set; }
                public string CtrName { get; set; }
                public string CtrPhone { get; set; }
                public string CtrZip { get; set; }
                public DateTime AddDate { get; set; }
            }
        }

    Here is the ps1 code that is trying to read the class from a file:

        function Build-Pogo
        {
            $FileDir = "D:\YourDirectoryOfPogo"
            $ClassName = "MyClassVM"
            $FileName = $FileDir + "\" + $ClassName + ".cs"

            # Build the class from the file
            $AllLines = [string]::join([environment]::newline, (Get-Content $FileName))
            Add-Type -TypeDefinition $AllLines

            # spin thru each property for class
            $ClassHandle = New-Object -typeName $ClassName
            $ClassHandle | ForEach-Object {Write-Host $_.name -foregroundcolor cyan}
        }

    *Note: the last line is a placeholder for more complex logic to come later. This breaks at the Add-Type with this error message for each get/set in the file:

        'MyApp.VM.MyClassVM.CtrId.get' must declare a body because it is not marked abstract or extern

    Any info on what I'm doing wrong will be greatly appreciated.

  • Is there any good reason for private methods existence in C# (and OOP in general)?

    - by Piotr Lopusiewicz
    I don't mean to troll, but I really don't get it. Why would language designers allow private methods instead of some naming convention (see __ in Python)? I searched for the answer, and the usual arguments are:

    a) To make the implementation cleaner/avoid a long vertical list of methods in IDE autocompletion
    b) To announce to the world which methods are the public interface and which may change and are just for implementation purposes
    c) Readability

    OK, so now: all of those could be achieved by naming all private methods with a __ prefix, or by a "private" keyword which doesn't have any implications other than being information for the IDE (don't put those in autocompletion) and other programmers (don't use it unless you really must). Hell, one could even require an unsafe-like keyword to access private methods, to really discourage this. I'm asking this because I work with some C# code and I keep changing private methods to public for test purposes, as many in-between private methods (like string generators for XML serialization) are very useful for debugging purposes (like writing some part of a string to a log file, etc.). So my question is: is there anything which is achieved by access restriction but couldn't be achieved by naming conventions without restricting access?

  • C++0x Smart Pointer Comparisons: Inconsistent, what's the rationale?

    - by GManNickG
    In C++0x (n3126), smart pointers can be compared, both relationally and for equality. However, the way this is done seems inconsistent to me. For example, shared_ptr defines operator< to be equivalent to:

        template <typename T, typename U>
        bool operator<(const shared_ptr<T>& a, const shared_ptr<U>& b)
        {
            return std::less<void*>()(a.get(), b.get());
        }

    Using std::less provides total ordering with respect to pointer values, unlike a vanilla relational pointer comparison, which is unspecified. However, unique_ptr defines the same operator as:

        template <typename T1, typename D1, typename T2, typename D2>
        bool operator<(const unique_ptr<T1, D1>& a, const unique_ptr<T2, D2>& b)
        {
            return a.get() < b.get();
        }

    It also defines the other relational operators in similar fashion. Why the change in method and "completeness"? That is, why does shared_ptr use std::less while unique_ptr uses the built-in operator<? And why doesn't shared_ptr also provide the other relational operators, like unique_ptr? I can understand the rationale behind either choice:

    - With respect to method: it represents a pointer, so just use the built-in pointer operators; versus: it needs to be usable within an associative container, so provide total ordering (as a vanilla pointer would get with the default std::less predicate template argument).
    - With respect to completeness: it represents a pointer, so provide all the same comparisons as a pointer; versus: it is a class type and only needs to be less-than comparable to be used in an associative container, so only provide that requirement.

    But I don't see why the choice changes depending on the smart pointer type. What am I missing?

    Bonus/related: std::shared_ptr seems to have followed from boost::shared_ptr, and the latter omits the other relational operators "by design" (and so std::shared_ptr does too). Why is this?

  • [c++/STL] Selective iterator

    - by rubenvb
    FYI: no Boost; yes, it has this; I want to reinvent the wheel ;)

    Is there some form of a selective iterator (possible) in C++? What I want is to separate strings like this:

        some:word{or other

    to a form like this:

        some : word { or other

    I can do that with two loops and find_first_of(":") and ("{"), but this seems (very) inefficient to me. I thought that maybe there would be a way to create/define/write an iterator that would iterate over all these values with for_each. I fear this would have me writing a full-fledged, way-too-complex custom iterator class for a std::string. So I thought maybe this would do:

        std::vector<size_t> list;
        size_t index = mystring.find(":");
        while (index != std::string::npos)
        {
            list.push_back(index);
            index = mystring.find(":", list.back());
        }
        std::for_each(list.begin(), list.end(), addSpaces(mystring));

    This looks messy to me, and I'm quite sure a more elegant way of doing this exists. But I can't think of it. Anyone have a bright idea? Thanks.

    PS: I did not test the code posted, just a quick write-up of what I would try.
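
    For what it's worth, a single-pass sketch that avoids the position bookkeeping entirely (one scan, no find calls):

        #include <string>

        std::string addSpaces(const std::string& in)
        {
            std::string out;
            out.reserve(in.size() * 3); // worst case: every char gains two spaces
            for (std::string::size_type i = 0; i < in.size(); ++i)
            {
                if (in[i] == ':' || in[i] == '{')
                {
                    out += ' ';
                    out += in[i];
                    out += ' ';
                }
                else
                    out += in[i];
            }
            return out;
        }

    Called on "some:word{or other", this yields "some : word { or other".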

  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as an opcode cache for a Drupal 6 production site. I have so far tested APC with several PHP files, with and without including other PHP files via include_once. I also tried to tweak the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result: apc.php does not show any changes in any values (except, of course, the changed apc.ini values are shown as they should be). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes.

    The apc.php test page shows:

        General Cache Information
        APC Version          3.1.9
        PHP Version          5.2.10
        APC Host             xxxx.xx.xx
        Server Software      Apache/2.2.3 (CentOS)
        Shared Memory        1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex locks)
        Start Time           2011/07/26 11:53:56
        Uptime               0 minutes
        File Upload Support  1

        Cached Files                 0 (0.0 Bytes)
        Hits                         1
        Misses                       1
        Request Rate (hits, misses)  2.00 cache requests/second
        Hit Rate                     1.00 cache requests/second
        Miss Rate                    1.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

        Cached Variables             0 (0.0 Bytes)
        Hits                         0
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

        apc.cache_by_default       1
        apc.canonicalize           1
        apc.coredump_unmap         0
        apc.enable_cli             0
        apc.enabled                1
        apc.file_md5               0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl                 3600
        apc.include_once_override  0
        apc.lazy_classes           0
        apc.lazy_functions         0
        apc.max_file_size          16
        apc.mmap_file_mask         /tmp/apcphp5.095eRm
        apc.num_files_hint         1024
        apc.preload_path
        apc.report_autofilter      0
        apc.rfc1867                0
        apc.rfc1867_freq           0
        apc.rfc1867_name           APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix         upload_
        apc.rfc1867_ttl            3600
        apc.serializer             default
        apc.shm_segments           1
        apc.shm_size               128M
        apc.slam_defense           0
        apc.stat                   0
        apc.stat_ctime             0
        apc.ttl                    7200
        apc.use_request_time       1
        apc.user_entries_hint      4096
        apc.user_ttl               7200
        apc.write_lock             1

        Host Status Diagrams:
        Free: 128.0 MBytes (100.0%)    Hits: 1 (50.0%)
        Used: 20.3 KBytes (0.0%)       Misses: 1 (50.0%)
        Detailed Memory Usage and Fragmentation:
        Fragmentation: 0%

    phpinfo shows:

        Server API             CGI/FastCGI
        APC Version            3.1.9
        APC Debugging          Enabled
        MMAP Support           Enabled
        MMAP File Mask         /tmp/apcphp5.JkKDk7
        Locking type           pthread mutex Locks
        Serialization Support  php
        Revision               $Revision: 308812 $
        Build Date             Jul 21 2011 14:31:12

    I followed these steps to find out whether suEXEC settings would prevent caching: http://www.litespeedtech.com/support/forum/showthread.php?t=4189

        [root@host /]# ps -ef|grep lsphp
        root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
        [root@host /]# ps -waux
        root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash

    ...which indicates that there is no lsphp running on the host. I also read the following article and comments, concluding that in my case the problem is not suEXEC, as the user apache is the httpd process owner: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/. Also, the suexec command is not recognized when logged in and launched as root on the host, and I'm almost confident that there is no cPanel running on the host to check whether a setting there would reset the running cache process at some interval.

    This leaves me with few clues about where to head next. I tried to set (with chown and chgrp) apache as the owner of the apc.php file and some test PHP files, resulting in a 500 server error. Is there a way to check whether the file permissions prevent APC from staying running? I'm tremendously grateful for any suggestions or help.

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine. In the question "Recommendations for hard drive performance boost", the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g. can I buy two drives, go into the device manager and "RAID-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID, but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID 0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets less than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    - When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?
    - What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator about how Apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and what's causing it to be so bad.
    - Why does PHP memory matter? My PHP setup apparently has a 30MB memory footprint. Will the site run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

  • Crashes and freezes after fixing "BOOTMGR is missing" error

    - by Greg-J
    I came back from a 3-day weekend to a computer that was off. I leave my PC on 24/7, so this was odd. I turned it on to get the dreaded "BOOTMGR is missing" screen. After two attempts at Windows Recovery, it booted into Windows fine. After an hour or so, I got a frozen Chrome and my start bar disappeared. Ctrl+Alt+Del brought up an error box telling me that Ctrl+Alt+Del failed to work properly. Clicking on any open application triggered an error (I can't recall the error now, but it essentially said that the application couldn't be found running, or something along those lines). I restarted, and again the same thing happened after a while of use. I turned it on, installed the 47 or so updates I had, and restarted it. After a while of use (under an hour), it just froze completely. My suspects are: the SSDs, the RAM, or the power supply. My system specs below:

    - (RAID 0) 2 x Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC internal solid state drives (SSD)
    - CORSAIR Vengeance 16GB (4 x 4GB) 240-pin DDR3 SDRAM DDR3 1600 (PC3 12800) desktop memory, model CML16GX3M4A1600C9
    - CORSAIR HX Series HX750 750W ATX12V 2.3 / EPS12V 2.91, SLI ready, CrossFire ready, 80 PLUS GOLD certified, modular, active PFC power supply
    - 1 x ASUS Maximus IV Gene-Z/GEN3 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 micro ATX Intel motherboard
    - 1 x Hitachi GST Deskstar 7K1000.C 0F10383 1TB 7200 RPM SATA 3.0Gb/s 3.5" internal hard drive (bare drive)
    - 1 x Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W quad-core desktop processor with Intel HD Graphics
    - 1 x SAPPHIRE 21197-00-40G Radeon HD 7970 3GB 384-bit GDDR5 PCI Express 3.0 x16 HDCP-ready, CrossFireX-support video card
    - 1 x Noctua NH-D14 120mm & 140mm SSO CPU cooler

    This is all crammed into a pretty small case (NZXT Vulcan) and has been running perfectly problem-free since January. The only thing out of the ordinary is that there is a fan in the case that is now making noise, whereas the case was previously completely silent. I have no reason to believe this is anything more than correlation, but felt it was worth mentioning. I believe it MAY be the SSDs, simply because of the BOOTMGR error, but I'm not sure how to test that theory. My belief that it may be the RAM is simply from experience with frozen machines. I haven't had the time to memtest it, but will. The power supply being the culprit is something I've picked up by reading similar threads on various forums, and it seems plausible, but I am unsure how to test it. ANY insight whatsoever would be greatly appreciated!

  • How can I verify that my SSD is performing as it should?

    - by Jon Skeet
    EDIT: Okay, so I've no idea what caused the change, but after trying loads of different things to work out what was wrong, I've rerun the WEI (about the 4th time in total) and the score has jumped to a far more respectable 7.3. I'm going to leave well alone now :)

    I've got a brand new 256GB SSD (Crucial CT256M225) which should have stellar performance. However, on my (also brand new) Dell Studio 1557 with Windows 7 Professional 64-bit, it's only giving a performance index of 5.9. I realise the performance index should be taken with a bit of a pinch of salt, but I wonder whether something's wrong. Given this paragraph from this MSDN article on Windows 7, I'd expect to see a high 6.x or possibly a 7.x figure:

        In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads. In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn't perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance.

    How can I diagnose any performance issues with either the disk or how Windows 7 is handling it? Are there any particularly good tools you'd recommend?

    One note of curiosity: I couldn't install the firmware update (to 1916) until I changed my BIOS handling of the drive to ATA mode; after installing the firmware I tried to boot the Windows installation DVD - but that only worked after turning it back to AHCI mode (which I've left it in). Installing Windows 7 took longer than I expected - it sat at the "Windows is loading files" prompt for a very long time. Likewise it was on "Expanding files (0%)" for a long time. Since installation it's been fine though - but I don't know whether it's really providing quite as beefy performance as it should.

    EDIT: My netbook with the 64GB equivalent drive has a performance index of 6.6...

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks of APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow.

    I have run the following to increase my system's shared memory to 2G (half of my 4G box):

        echo '2147483648' > /proc/sys/kernel/shmmax

    Then ran ipcs -lm, which returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf, then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version                3.1.9
        APC Debugging          Disabled
        MMAP Support           Enabled
        MMAP File Mask         /tmp/apc.bPS7rB
        Locking type           pthread mutex Locks
        Serialization Support  php
        Revision               $Revision: 308812 $
        Build Date             Oct 11 2011 22:55:02

        Directive                  Local Value
        apc.cache_by_default       On
        apc.canonicalize           On
        apc.coredump_unmap         Off
        apc.enable_cli             Off
        apc.enabled                On
        apc.file_md5               Off
        apc.file_update_protection 2
        apc.filters                no value
        apc.gc_ttl                 3600
        apc.include_once_override  Off
        apc.lazy_classes           Off
        apc.lazy_functions         Off
        apc.max_file_size          10M
        apc.mmap_file_mask         /tmp/apc.bPS7rB
        apc.num_files_hint         1000
        apc.preload_path           no value
        apc.report_autofilter      Off
        apc.rfc1867                Off
        apc.rfc1867_freq           0
        apc.rfc1867_name           APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix         upload_
        apc.rfc1867_ttl            3600
        apc.serializer             default
        apc.shm_segments           1
        apc.shm_size               256M
        apc.slam_defense           On
        apc.stat                   Off
        apc.stat_ctime             Off
        apc.ttl                    7200
        apc.use_request_time       On
        apc.user_entries_hint      4096
        apc.user_ttl               0
        apc.write_lock             On

    apc.php reveals the following graph, no matter how long the server runs (the cache size fluctuates and hovers at just under 32MB; see the image at http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed by refreshing the apc.php page, which shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32MB for its cache size?

    Note that identical behavior occurs for eAccelerator, XCache, and APC. I read here that suEXEC could cause this problem: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html
