Search Results

Search found 15241 results on 610 pages for 'solaris operating environment'.

  • Why would this Lua optimization hack help?

    - by Ian Boyd
    I'm looking over a document that describes various techniques to improve the performance of Lua script code, and I'm shocked that such tricks would be required. (Although I'm quoting Lua, I've seen similar hacks in JavaScript.) Why would this optimization be required? For instance, the code

        for i = 1, 1000000 do
            local x = math.sin(i)
        end

    runs 30% slower than this one:

        local sin = math.sin
        for i = 1, 1000000 do
            local x = sin(i)
        end

    They're re-declaring the sin function locally. Why would this be helpful? It's the job of the compiler to do that anyway. Why is the programmer having to do the compiler's job? I've seen similar things in JavaScript, so obviously there must be a very good reason why the interpreting compiler isn't doing its job. What is it? I see it repeatedly in the Lua environment I'm fiddling in; people redeclaring variables as local:

        local strfind = strfind
        local strlen = strlen
        local gsub = gsub
        local pairs = pairs
        local ipairs = ipairs
        local type = type
        local tinsert = tinsert
        local tremove = tremove
        local unpack = unpack
        local max = max
        local min = min
        local floor = floor
        local ceil = ceil
        local loadstring = loadstring
        local tostring = tostring
        local setmetatable = setmetatable
        local getmetatable = getmetatable
        local format = format
        local sin = math.sin

    What is going on here that people have to do the work of the compiler? Is the compiler confused about how to find format? Why is this an issue a programmer has to deal with? Why would this not have been taken care of in 1993? I also seem to have hit a logical paradox:

        Optimization should not be done without profiling.
        Lua has no ability to be profiled.
        Therefore, Lua should not be optimized.
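
    To make the comparison concrete, here is a minimal benchmark sketch in plain Lua (an editor's addition, not from the post; os.clock() granularity varies by platform, and the 30% figure is the document's claim):

        -- time a global lookup in the loop body
        local t0 = os.clock()
        for i = 1, 1000000 do local x = math.sin(i) end
        print("global math.sin:", os.clock() - t0)

        -- time the same loop with the global cached in a local
        local sin = math.sin
        local t1 = os.clock()
        for i = 1, 1000000 do local x = sin(i) end
        print("cached local sin:", os.clock() - t1)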

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate I was worried about blowing past a 1GB MySQL limit in no time. To deal with this I created distributed SQLite databases per user to store the large data objects. It's worked wonderfully so far. SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment. I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at the most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.
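
    For reference, a hedged sketch of the per-user access pattern described above, assuming PHP with PDO's SQLite driver (the path, table, and timeout are illustrative, not from the post):

        <?php
        $userId = 42; // illustrative
        // open this user's private database file
        $db = new PDO('sqlite:/data/users/' . $userId . '.db');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        // for the SQLite driver, ATTR_TIMEOUT is the busy timeout in seconds:
        // readers wait up to 5s if the hourly writer holds the lock
        $db->setAttribute(PDO::ATTR_TIMEOUT, 5);
        $rows = $db->query('SELECT payload FROM objects')->fetchAll(PDO::FETCH_ASSOC);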

  • PHP error reporting - having trouble matching local & web server settings

    - by Andrew Heath
    I'm trying to add a custom error handler to my site, but in doing so have discovered that my webhost's PHP error-reporting settings and those of my localhost (default XAMPP) vary considerably. While I thought I was programming to E_STRICT like a good little boy, adding the error handler on my webhost revealed craploads of runtime notices. Example:

        Runtime notice: strtotime() [function.strtotime]: It is not safe to rely on the system's
        timezone settings. Please use the date.timezone setting, the TZ environment variable or
        the date_default_timezone_set() function. In case you used any of those methods and you
        are still getting this warning, you most likely misspelled the timezone identifier.
        We selected 'America/Chicago' for 'CST/-6.0/no DST' instead in /home/...

    Clearly this isn't a red-alert, showstopping error. But what bothers me is that it doesn't show up on my localhost. I'd certainly like to improve my code by addressing these sorts of issues if I could see them! I've looked through both php.ini files; my webhost's setting is error_reporting = E_ALL & ~E_NOTICE, whereas mine was error_reporting = E_STRICT, which I had thought was better. However, changing mine to match and rebooting the server doesn't seem to have accomplished anything. Could someone please point me in the right direction?
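
    For comparison, a hedged sketch of forcing the stricter reporting (and silencing this particular notice) at runtime; the timezone value comes from the notice above and is only an example:

        <?php
        // before PHP 5.4, E_ALL did not include E_STRICT, so OR them together
        error_reporting(E_ALL | E_STRICT);
        ini_set('display_errors', '1');
        // setting a default timezone silences the strtotime() notice
        date_default_timezone_set('America/Chicago');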

  • std::string insert method has ambiguous overloads?

    - by sdg
    Environment: VS2005 C++ using STLport 5.1.4. Compiling the following code snippet:

        std::string copied = "asdf";
        char ch = 's';
        copied.insert(0, 1, ch);

    I receive an error:

        Error 1 error C2668: 'stlpx_std::basic_string<_CharT,_Traits,_Alloc>::insert' : ambiguous call to overloaded function

    It appears that the problem is the insert method call on the string object. The two defined overloads are

        void insert(iterator p, size_t n, char c);
        string& insert(size_t pos1, size_t n, char c);

    But given that STLport uses a simple char* as its iterator, the literal zero in the insert call in my code is ambiguous. While I can easily overcome the problem by hinting, such as copied.insert(size_t(0), 1, ch), my question is: is this overloading and possible ambiguity intentional in the specification? Or is it more likely an unintended side effect of the specific STLport implementation? (Note that the Microsoft-supplied STL does not have this problem, as it has a class for the iterator instead of a naked pointer.)
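
    A self-contained sketch of the hinting workaround mentioned above: forcing the first argument to size_t selects the position-based overload, since the literal 0 no longer converts equally well to the char* iterator:

        #include <cstddef>
        #include <string>

        int main() {
            std::string copied = "asdf";
            char ch = 's';
            // explicit size_t removes the ambiguity with insert(iterator, size_t, char)
            copied.insert(static_cast<std::size_t>(0), 1, ch);
            return 0;
        }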

  • How to Store and Retrieve Images Using MS SQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into an MS SQL database. I'll try to break this down as best I can:

    1. What data type should I be using to store image files (jpeg/png/gif/etc.)? Right now my table is using the image data type, but I am curious whether varbinary would be a better option.

    2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow insertion of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form, with PHP handling the input data and placing it into the table?

    3. How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to send a header('Content-Type: image/jpeg')?

    I have no problem doing any of these things with MySQL, but the MS SQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
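
    As a hedged sketch of one common approach to the first two questions (table and file names are invented): varbinary(max) is generally recommended over the deprecated image type, and OPENROWSET ... SINGLE_BLOB can pull a file in directly from a Management Studio query window:

        -- store the bytes in varbinary(max) rather than the deprecated image type
        CREATE TABLE Pictures (
            Id       INT IDENTITY PRIMARY KEY,
            FileName NVARCHAR(260),
            Data     VARBINARY(MAX)
        );

        -- insert a file from disk (the path must be readable by the SQL Server service)
        INSERT INTO Pictures (FileName, Data)
        SELECT 'photo.jpg', BulkColumn
        FROM OPENROWSET(BULK 'C:\images\photo.jpg', SINGLE_BLOB) AS img;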

  • Customers angry, fighting unknown DLL dependencies

    - by wheaties
    I'm a one-man show developing a C++ Windows application for a customer. Over the past several months we've been running into the same problems with missing DLL dependencies on customer machines. Despite my best efforts, something keeps going wrong and we get angry emails back. My boss and my boss's boss are angry with me, and the customers aren't happy. I'm hoping you can help out with suggestions/ideas on how to get the deliverables in order. Before some of the obvious answers come up:

    - I have no test machine. That is, I can't replicate the customer environment or attempt to install the app on a "clean" system to catch gotchas before shipping.
    - I've tried using depends.exe to track down which versions of the DLLs my project depends upon. I'm shipping our code with the redistributables I've been able to find that way. After that it's an angry-customer-email waiting game.
    - I'm required to use a third-party DLL which cannot be registered (it's buggy as hell).
    - I'm not supposed to use InstallShield, any other automated installer, or write an install script. I provide written instructions on how to get the app installed (unzip, double-click the exe file).

    I'm tired of taking heat for this stuff. What am I missing that I could be doing? What should I ask for in terms of support from my employer? And how should I ask for that support in a way that they'll provide it?

  • How can I optimize the SELECT statement running on an Oracle database?

    - by Elvis Lou
    I have a SELECT statement in Oracle:

        SELECT COUNT(DISTINCT ds1.endpoint_msisdn) multiple30,
               dss1.service,
               dss1.endpoint_provisioning_id,
               dss1.company_scope,
               Nvl(x.subscription_status, dss1.subscription_status) subscription_status
        FROM   daily_summary ds1
               join daily_summary ds2
                 ON ds1.endpoint_msisdn = ds2.endpoint_msisdn,
               daily_summary_static dss1,
               daily_summary_static dss2,
               (SELECT NULL subscription_status FROM dual
                UNION ALL
                SELECT -2 subscription_status FROM dual) x
        WHERE  ds1.summary_ts >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
        AND    ds1.summary_ts <= To_date('10-04-2012', 'dd-mm-yyyy')
        AND    dss1.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
        AND    dss1.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
        AND    dss2.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
        AND    dss2.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
        AND    dss1.service <> dss2.service
        AND    ( dss1.company_scope = 2 OR dss1.company_scope = 5 )
        AND    ( dss2.company_scope = 2 OR dss2.company_scope = 5 )
        AND    dss1.company_scope = dss2.company_scope
        AND    ds1.endpoint_noc_id = dss1.endpoint_noc_id
        AND    ds1.endpoint_host_id = dss1.endpoint_host_id
        AND    ds1.endpoint_instance_id = dss1.endpoint_instance_id
        AND    ds2.endpoint_noc_id = dss2.endpoint_noc_id
        AND    ds2.endpoint_host_id = dss2.endpoint_host_id
        AND    ds2.endpoint_instance_id = dss2.endpoint_instance_id
        AND    dss1.endpoint_provisioning_id = dss2.endpoint_provisioning_id
        AND    Least(1, ds1.total_actions) = 1
        AND    Least(1, ds2.total_actions) = 1
        GROUP  BY dss1.service,
                  dss1.endpoint_provisioning_id,
                  dss1.company_scope,
                  Nvl(x.subscription_status, dss1.subscription_status);

    This query takes about 26 minutes to return in my environment, but if I remove this section:

        dss1.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND
        dss1.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND
        dss2.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND
        dss2.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND

    it takes only 20 seconds to run. We have an index on the last_active column; I don't know why this section slows the query down so much. Any ideas?
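
    One generic way to investigate (an editor's aside, not from the post) is to compare the optimizer's plan with and without the last_active predicates; a hedged sketch against a simplified stand-in query:

        EXPLAIN PLAN FOR
        SELECT COUNT(DISTINCT ds1.endpoint_msisdn)
        FROM   daily_summary ds1
        WHERE  ds1.summary_ts >= TO_DATE('10-04-2012', 'dd-mm-yyyy') - 30
        AND    ds1.summary_ts <= TO_DATE('10-04-2012', 'dd-mm-yyyy');

        -- show whether the index on last_active (or summary_ts) is actually used
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);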

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

    - Create a separate repository for the installer and related files
    - Create a subrepository for the installer and related files
    - Use a (yet to be identified) build-dependency manager

    I am wary of using a subrepository with Mercurial, based on what I've read so far about the (apparently) incomplete implementation. I would like a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try any of them. I thought I'd use TortoiseHg as a model: it does not keep the TortoiseHg binaries in its repository, although it does have some binaries such as kdiff3.exe. Instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?
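
    For reference, a minimal sketch of what the subrepository option would look like, assuming a .hgsub file at the repository root (the path and URL are invented):

        # .hgsub: maps a working-directory path to its subrepo source
        Install = https://hg.example.com/repos/installer-files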

  • Drawbacks of using Class Methods in Objective-C

    - by RickiG
    Hi, I was wondering if there are any memory/performance drawbacks, or just drawbacks in general, with using class methods like:

        + (void)myClassMethod:(NSString *)param {
            // much to be done...
        }

    or

        + (NSArray *)myClassMethod:(NSString *)param {
            // much to be done...
            return [[[NSArray alloc] init] autorelease];
        }

    It is convenient to place a lot of functionality in class methods, especially in an environment where I have to deal with memory management (iPhone), but there is usually a catch when something is convenient. An example could be a hypothetical web service that consists of a lot of classes with very simple functionality, i.e.:

        TomorrowsXMLResults
        TodaysXMLResults
        YesterdaysXMLResults
        MondaysXMLResults
        TuesdaysXMLResults
        ...

    I collect a ton of these in my web service class, instantiate the web service class, and let methods on it call class methods on the Results classes. The classes are simple, but they handle large amounts of XML, instantiate lots of objects, etc. I guess I am asking whether class methods live on the stack or are treated differently in memory than messages to instantiated objects? Or are they just instantiated and torn down again behind the scenes, and thus just a way of saving a few lines of code?

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to Boost and thought I'd try it out with some realistic deployment scenarios for the DLLs, so I used the following command to compile/install the libraries:

        .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt

    That seemed to work, but my simple program (taken right from the "Getting Started" page) fails:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        // Place your functions after this line
        int main() {
            std::string line;
            boost::regex pat("^Subject: (Re: |Aw: )*(.*)");
            while (std::cin) {
                std::getline(std::cin, line);
                boost::smatch matches;
                if (boost::regex_match(line, matches, pat))
                    std::cout << matches[2] << std::endl;
            }
        }

    It fails with the following linker error:

        fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib'

    I'm sure that both the .lib and the DLLs are in that directory, and named how I want them to be (i.e. boost_regex.lib, etc., all unversioned, as --layout=system specifies). So why is it looking for the versioned name? And how do I get it to look for the unversioned name of the library? I've tried this with more "normal" options, such as:

        .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt

    And that works fine. I made sure my compiler saw the stage\lib directory, and it compiled and ran fine with nothing beyond having the environment look in the right lib directory. But when I took those "testing" directories away and wanted to use these other (unversioned) ones, it failed. I'm under VS2005 here on XP. Any ideas?
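
    One detail worth knowing here (a general Boost/MSVC mechanism, not something stated in the post): the versioned name comes from Boost's auto-linking headers, which embed a #pragma comment(lib, ...) built from the toolset and version. A hedged sketch of disabling it so unversioned libraries can be listed manually in the linker settings:

        // define before including any Boost header (or pass via /D on the command line)
        #define BOOST_ALL_NO_LIB   // turn off MSVC auto-linking entirely
        #include <boost/regex.hpp>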

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens) {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is:

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>    return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>    ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.

  • Django: DatabaseLockError exception with Djapian

    - by jul
    Hi, I get the exception shown below when executing indexer.update(). I have no idea what to do: it used to work, and now the index database seems "locked". Can anybody help? Thanks.

        Environment:
        Request Method: POST
        Request URL: http://piem.org:8000/restaurant/add/
        Django Version: 1.1.1
        Python Version: 2.5.2
        Installed Applications:
        ['django.contrib.auth',
         'django.contrib.contenttypes',
         'django.contrib.sessions',
         'django.contrib.comments',
         'django.contrib.sites',
         'django.contrib.admin',
         'registration',
         'djapian',
         'resto',
         'multilingual']
        Installed Middleware:
        ('django.middleware.common.CommonMiddleware',
         'django.contrib.sessions.middleware.SessionMiddleware',
         'django.contrib.auth.middleware.AuthenticationMiddleware',
         'django.middleware.locale.LocaleMiddleware',
         'multilingual.middleware.DefaultLanguageMiddleware')

        Traceback:
        File "/var/lib/python-support/python2.5/django/core/handlers/base.py" in get_response
          92. response = callback(request, *callback_args, **callback_kwargs)
        File "/home/jul/atable/../atable/resto/views.py" in addRestaurant
          639. Restaurant.indexer.update()
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/indexer.py" in update
          181. database = self._db.open(write=True)
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/database.py" in open
          20. xapian.DB_CREATE_OR_OPEN,
        File "/usr/lib/python2.5/site-packages/xapian.py" in __init__
          2804. _xapian.WritableDatabase_swiginit(self,_xapian.new_WritableDatabase(*args))

        Exception Type: DatabaseLockError at /restaurant/add/
        Exception Value: Unable to acquire database write lock on /home/jul/atable/djapian_spaces/resto/restaurant/resto.index.restaurantindexer: already locked

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL), and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to the staging server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory (see the sketch below).

    Pushing to the production server: Here's my newest source of confusion. In the response that I cited above, @Paul states that it's a completely different story when pushing to a live, development server, which made me curious as to why. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
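
    A minimal sketch of the post-receive hook mentioned above (an editor's construction; paths and the branch name are illustrative). A common variant checks the pushed code out into the web root directly rather than cloning:

        #!/bin/sh
        # runs inside the bare repo after each push; deploy the release branch
        GIT_WORK_TREE=/var/www/staging git checkout -f release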

  • Automatically update ActiveRecord object

    - by Aleksandr Koss
    I have these models:

        class Father < ActiveRecord::Base
          has_many :children
        end

        class Child < ActiveRecord::Base
          belongs_to :father
        end

    Then I do something like this:

        $ script/console test
        Loading test environment (Rails 2.3.5)
        >> @f1 = Father.create :test => "Father"
        => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41">
        >> @f2 = Father.find :first
        => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41">
        >> @f1 == @f2
        => true
        >> @f1.children
        => []
        >> @f2.children
        => []
        >> @f1.children.create :test => "Child1"
        => #<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">
        >> @f1.children
        => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">]
        >> @f2.children
        => []
        >> @f2.reload
        => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41">
        >> @f2.children
        => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">]

    As you can see, Rails caches the @f2 object's association. To get fresh data we have to call reload. Is there a way to automatically reload @f2 after a children update, without calling the reload method?
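
    A hedged aside: in Rails 2.x, an association reader accepts a force-reload flag, which re-queries instead of returning the cached collection:

        # bypass the cached association without reloading the whole record
        @f2.children(true)  # => [#<Child id: 1, ...>]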

  • Loading an XML configuration file BEFORE the flex application loads

    - by Shahar
    Hi, we are using an XML file as an external configuration file for several parameters in our application (including default values for UI components and property values of some service-layer objects). The idea is to load the XML configuration file before the Flex application initializes any of its components. This is crucial because XML loading is processed asynchronously in Flex, which can potentially cause race conditions in the application. For example: the configuration file holds the endpoint URL of a web service used to obtain data from the server. The URL resides in the XML because we want to allow our users to alter the endpoint URL according to their environment. Now, because the endpoint URL is retrieved only after the XML has been completely loaded, some of the application's components might invoke operations on this web service before it is initialized with the correct endpoint. The trivial solution would have been to suspend the initialization of the application until the complete event is dispatched by the loader, but it appears that this solution is far from trivial: I haven't found a single approach that allows me to load the XML before any other object in the application. Can anyone advise or comment on this matter? Regards, Shahar

  • How can I obtain the local TCP port and IP Address of my client program?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server-side and client-side code set up to communicate. Currently, my client code successfully connects to the server code, the server sends it a test message, and both quit out. Perfect! That's exactly what I wanted to accomplish. Now I'm playing around with the functions used to obtain info about the two environments (server and client). I'd like to obtain the local IP address and dynamically assigned TCP port of the client. The function I've found to do this is getsockname():

        // set up the socket
        if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
            perror("client: socket");
            continue;
        }

        // retrieve the locally bound name of the socket and store it in the sockaddr structure
        sa_len = sizeof(sa);
        getsock_check = getsockname(sockfd, (struct sockaddr *)&sa, (socklen_t *)&sa_len);
        if (getsock_check == -1) {
            perror("getsockname");
            exit(1);
        }

        printf("Local IP address is: %s\n", inet_ntoa(sa.sin_addr));
        printf("Local port is: %d\n", (int) ntohs(sa.sin_port));

    but the output is always zero:

        Local IP address is: 0.0.0.0
        Local port is: 0

    Does anyone see anything I might be, or am definitely, doing wrong? Thanks so much in advance for all your help!

  • Sweave can't see a vector if run from a function ?

    - by PaulHurleyuk
    I have a function that sets a vector to a string, copies a Sweave document with a new name, and then runs that Sweave document. Inside the Sweave document I want to use the vector I set in the function, but the document doesn't seem to see it. (Edit: I changed this function to use tempdir() as suggested by Dirk.) I created a Sweave file test_sweave.rnw:

        %
        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{Sweave}
        \begin{document}
        \title{Test Sweave Document}
        \author{gb02413}
        \maketitle
        <<>>=
        ls()
        Sys.time()
        print(paste("The chosen study was ", chstud, sep=""))
        @
        \end{document}

    and I have this function:

        onOK <- function(){
            chstud <- "test"
            message(paste("Chosen Study is ", chstud, sep=""))
            newfile <- paste(chstud, "_report", sep="")
            mypath <- paste(tempdir(), "\\", sep="")
            setwd(mypath)
            message(paste("Copying test_sweave.Rnw to ",
                          paste(mypath, newfile, ".Rnw", sep=""), sep=""))
            file.copy("c:\\local\\test_sweave.Rnw",
                      paste(mypath, newfile, ".Rnw", sep=""),
                      overwrite=TRUE)
            Sweave(paste(mypath, newfile, ".Rnw", sep=""))
            require(tools)
            texi2dvi(file = paste(mypath, newfile, ".tex", sep=""), pdf = TRUE)
        }

    If I run the code from the function directly, the resulting file has this output for ls():

        > ls()
        [1] "chstud"  "mypath"  "newfile" "onOK"

    However, if I call onOK() I get this output:

        > ls()
        [1] "onOK"

    and the print(...chstud...) call generates an error. I suspect this is an environment problem, but I assumed that because the call to Sweave occurs within the onOK function, it would be in the same environment and would see all the objects created within the function. How can I get the Sweave process to see the chstud vector? Thanks, Paul.
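
    A hedged sketch of one workaround, assuming Sweave evaluates chunks in the global environment rather than in the calling function's frame:

        onOK <- function() {
            # make chstud visible to the Sweave chunks by assigning it globally
            assign("chstud", "test", envir = .GlobalEnv)
            # ... copy and Sweave the file as before ...
        }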

  • Help me with the simplest program for a "Trusted" application

    - by idazuwaika
    Hi, I hope someone in the large community here can help me write the simplest "trusted" program that I can expand from. I'm using Ubuntu Linux 9.04 with TPM emulator 0.60 from Mario Strasser (http://tpm-emulator.berlios.de/). I have installed the emulator and TrouSerS, and can successfully run programs from tpm-tools after starting the tpmd and tcsd daemons. I hope to start developing my application, but I have problems compiling the code below:

        #include <trousers/tss.h>
        #include <trousers/trousers.h>
        #include <stdio.h>

        TSS_HCONTEXT hContext;

        int main()
        {
            Tspi_Context_Create(&hContext);
            Tspi_Context_Close(hContext);
            return 0;
        }

    After trying to compile with

        g++ tpm.cpp -o tpmexe

    I receive the errors

        undefined reference to 'Tspi_Context_Create'
        undefined reference to 'Tspi_Context_Close'

    What do I have to #include to successfully compile this? Is there anything I've missed? I'm familiar with C, but not so much with the Linux/Unix programming environment.

    P.S.: I am a part-time student in a Master in Information Security programme. My involvement with programming has been largely for academic purposes.
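
    A hedged guess at the missing piece (the headers look fine; undefined references at this stage are a linker problem, not a compile problem): the TrouSerS TSS library has to be named at link time:

        # link against libtspi, which provides the Tspi_* functions
        g++ tpm.cpp -o tpmexe -ltspi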

  • Qt4Dotnet on Mac OS X

    - by Tony
    Hello everyone. I'm using the Qt4Dotnet project to port an application originally written in C# to Linux and Mac. The port to Linux didn't take much effort and works fine. But Mac (10.4 Tiger) is a bit more stubborn. The problem: when I try to start my application, it throws an exception stating that com.trolltech.qt.QtJambi_LibraryInitializer is unable to find all the necessary libraries. The QtJambi library initializer uses the java.library.path VM environment variable, which includes the current working directory, so I put all the necessary libraries in the working directory. When I run the application from the MonoDevelop IDE, the initializer is able to load one library, but the others are 'missing':

        An exception was thrown by the type initializer for com.trolltech.qt.QtJambi_LibraryInitializer --->
        java.lang.RuntimeException: Loading library failed, progress so far:
        No 'qtjambi-deployment.xml' found in classpath, loading libraries via 'java.library.path'
        Loading library: 'libQtCore.4.dylib'...
        - using 'java.library.path'
        - ok, path was: /Users/chin/test/bin/Debug/libQtCore.4.dylib
        Loading library: 'libqtjambi.jnilib'...
        - using 'java.library.path'

    Both libQtCore.4.dylib and libqtjambi.jnilib are in the same directory. When I run it from the command prompt, the initializer is unable to load even libQtCore.4.dylib. I'm using Qt4Dotnet v4.5.0 (currently the latest) with QtJambi v4.5.2 libraries. This might be the source of the problem, but I'm neither able to compile Qt4Dotnet v4.5.2 myself nor to find QtJambi v4.5.0 libraries. The project's page states that a patch should be applied to QtJambi's source code for compatibility with the Mono framework, but this patch hasn't been released yet. Without the patch the application crashes in a strange manner (other than the library seek fault). I must note that the original QtJambi loads all the necessary libraries perfectly, so this might be an issue with the IKVM compiler used to translate QtJambi into a .NET library. Any suggestions on how I can overcome this problem?

  • Apache proxy to Lighttpd: changing $_SERVER['HTTP_HOST'] in php

    - by watain
    I have a WordPress blog running on lighttpd 1.4.19, listening on www00:81. On the same host, Apache 2.2.11 listens on port 80 and proxies connections from http://blog.mydomain.org:80 to http://blog.mydomain.org:81. The Apache virtual host looks as follows:

        <VirtualHost *:80>
            ServerName blog.mydomain.org
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://blog.mydomain.org:81/
            ProxyPassReverse / http://blog.mydomain.org:81/
        </VirtualHost>

    Using debug.log-request-handling = "enable" I get the following log entries when I browse http://blog.mydomain.org:80 (notice the Host headers):

        2010-05-10 08:47:14: (request.c.294) fd: 6 request-len: 853
        GET / HTTP/1.1
        Host: blog.mydomain.org:81
        [...]
        2010-05-10 08:47:15: (request.c.294) fd: 8 request-len: 754
        GET /wp-content/uploads/2010/01/image.gif?w=280 HTTP/1.1
        Host: www00:81

    My problem: as far as I know, the PHP variable $_SERVER['HTTP_HOST'] is set from that Host header, and WordPress uses it to build the URLs of pictures on the blog. Those URLs won't be accessible from behind a firewall, of course. How can I force the Host header to be blog.mydomain.org instead of blog.mydomain.org:81 or www00:81? I already added server.name = "blog.mydomain.org" to my lighttpd.conf, but this didn't work. Any suggestions are appreciated, thank you.
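
    A hedged sketch of one standard fix on the Apache side (ProxyPreserveHost is a stock mod_proxy directive; whether it fully resolves the WordPress URL issue in this setup is untested):

        <VirtualHost *:80>
            ServerName blog.mydomain.org
            # pass the client's original Host header to the backend
            # instead of the host:port from the ProxyPass target
            ProxyPreserveHost On
            ProxyPass / http://blog.mydomain.org:81/
            ProxyPassReverse / http://blog.mydomain.org:81/
        </VirtualHost>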

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631

    I need something similar, but without having to allocate a buffer: the buffer is large in theory, but the user-space program only needs to access some parts of it (it mocks some registers of a hardware device). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are actually requested by user space. So: I use a my_mmap() function for the device file (actually the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the vm_area_struct's .fault handler (here named my_fault()), I should return a page. Except that in the my_fault() method I cannot obtain a page through

        vmf->page = vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT));

    since there is no allocated buffer:

        my_buf = vmalloc_user(MY_BUF_SIZE);

    fails with "allocation failed: out of vmalloc space - use vmalloc=<size> to increase size" (and there is no room or swap to increase that vmalloc= parameter). So I need to get a page from the kernel and fill in the vmf->page field. How do I allocate a page? (I assume that the offset of the page is known, as it is vmf->pgoff.) What base memory should I use instead of my_buf?

    P.S.: I also set vma->flags |= VM_NORESERVE; (in my_mmap()), but I'm not sure it helps. Is there any vmalloc_user_unreserved()-like function (let's say, lazy allocation)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf = vmalloc_user(<large_size>) didn't work.
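
    A hedged sketch of a fault handler that hands out an anonymous zeroed page per fault (an editor's construction, not the linked answer's code; it uses the older two-argument .fault signature, and the bookkeeping needed to free pages later, or to return the same page on repeated faults at one offset, is omitted):

        #include <linux/mm.h>

        static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
            struct page *page;

            /* allocate one zeroed page on first access to this offset */
            page = alloc_page(GFP_KERNEL | __GFP_ZERO);
            if (!page)
                return VM_FAULT_OOM;

            vmf->page = page;   /* core mm inserts it into the page tables */
            return 0;
        }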

  • How to disable server-side caching on IIS 7.5 (asp net mvc3)

    - by troebr
    I'm struggling with my IIS setup regarding caching. Here's a brief description of my problem: I'm making a site for mobile and non-mobile browsers, sharing the same controllers. I.e., mysite/page will serve either mysite/page.cshtml or mysite/M/page.cshtml, depending on the device. Here's the catch: it worked fine in my local and integration environments (Cassini and IIS 6), but on another machine (2008 R2 / IIS 7.5) there is apparently an aggressive server-side caching policy:

    - If I access the website from a desktop machine, I get the correct pages (desktop version).
    - If I then use my mobile phone to access the site, I get the desktop version (which implies a server-side cache; my phone is not on the same network).
    - Conversely, if I restart the server and access the site with my phone first, I get the mobile version on my desktop (only for the pages I already visited, of course).

    I've tried two solutions so far. Disabling OutputCache in my Web.config:

        <httpModules>
            [..]
            <remove name="OutputCache" />
        </httpModules>

    And unchecking "Enable output cache" under "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem on my other server (IIS 6.0), although caching is enabled there, which leads me to think it is related to the caching additions in IIS 7. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your IIS lights!
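
    For reference, a hedged sketch of switching output caching off in the system.webServer section of Web.config, which IIS 7+ reads directly (kernel caching is disabled here as well; whether this addresses the mobile/desktop mix-up is untested):

        <system.webServer>
            <caching enabled="false" enableKernelCache="false" />
        </system.webServer>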

  • Automatic Deployment of Windows Application

    - by dileepkrishnan
    Hi, we have set up continuous integration in our development environment using SVN, CC.NET, MSBuild and NUnit. Now we want to automate the process of moving (copying) builds from one stage to another, like this:

    - Whenever a new build succeeds in Dev, it should be copied automatically to the QA server (a folder on the QA server, to be exact).
    - Whenever a QA build passes its tests in QA, that build should be copied to the UAT server (a folder on the UAT server, to be exact). This should be implemented as a process (a CC.NET task, for example) which we can start when QA succeeds.
    - Whenever a UAT build passes its tests in UAT, it should be copied to the PROD server (a folder on the PROD server, to be exact). This should be implemented as a process (a CC.NET task, for example) which we can start when UAT succeeds.

    How do I implement this? Can this be done using CC.NET alone? Or can it be done using MSBuild? Or do I need to employ both? Please advise what exactly needs to be done (a sketch of one possible promotion step is below). Thanks, Dileep Krishnan
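
    A hedged sketch of what one promotion step could look like as an MSBuild target (server names and paths are invented; CC.NET could invoke the target after a successful build):

        <Target Name="PromoteToQA">
            <ItemGroup>
                <BuildOutput Include="$(OutputPath)**\*.*" />
            </ItemGroup>
            <!-- copy the build output tree to the QA drop folder -->
            <Copy SourceFiles="@(BuildOutput)"
                  DestinationFolder="\\qaserver\builds\%(RecursiveDir)" />
        </Target>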

  • Trying not to need two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both x86 and x64 environments. It uses Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.dll. This DLL is different depending on whether the system is x86 or x64. Currently, I have two separate solutions and am maintaining the code in both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU," and it is my understanding that VS should compile my code to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet if I reference the x64 DLL, I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to develop this program efficiently when it needs to work on x64. Thanks.
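
    A hedged sketch of the usual single-solution workaround: keep one project and pick the reference per platform with MSBuild conditions in the .csproj (the paths are invented; this assumes explicit x86 and x64 build configurations rather than Any CPU):

        <Reference Include="Oracle.DataAccess" Condition="'$(Platform)' == 'x86'">
            <HintPath>..\libs\x86\Oracle.DataAccess.dll</HintPath>
        </Reference>
        <Reference Include="Oracle.DataAccess" Condition="'$(Platform)' == 'x64'">
            <HintPath>..\libs\x64\Oracle.DataAccess.dll</HintPath>
        </Reference>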

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works well for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

    - The GUI tools aren't mature.
    - Using the command-line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges (a sketch touching on this follows below).
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or how to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips, and tricks for making it easier and more productive to use git in a large organization where some folks are not command-line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"
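
    On the permissions point specifically, a hedged aside: access-control layers such as gitolite sit in front of a central git server and grant the per-branch, per-user rights that stock git lacks. A sketch of a gitolite.conf fragment (repo, branch, and user names invented):

        repo bigproject
            RW+ refs/heads/team-x/  = alice bob
            R                       = @all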
