Search Results

Search found 20099 results on 804 pages for 'virtual host'.


  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have a auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options acitvated) way to do this? Best wishes, Alexander Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does the GCC do behind the scenes that costs so much memory? (I compiled with deactivating all optimizations as well and got the same results) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by standard named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
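    A minimal sketch of consuming data embedded with objcopy, as described above. The symbol names are the ones the poster reports working on his toolchain (no leading underscore); the objcopy command line in the comment is an assumption about how the object file was produced, so adjust both to match your build.

        // Hypothetical sketch of reading objcopy-embedded data from C++.
        // Assumed build steps (names illustrative):
        //   objcopy -I binary -O pe-i386 inbuilt_training_data.bin training_data.o
        //   g++ main.cpp training_data.o -o app
        #include <cstddef>
        #include <iostream>
        #include <string>

        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        int main()
        {
            const char* begin = &binary_inbuilt_training_data_bin_start;
            const char* end   = &binary_inbuilt_training_data_bin_end;
            std::size_t size  = static_cast<std::size_t>(end - begin);

            // Treat the embedded blob as raw bytes and parse it however the
            // original (Perl-generated) layout dictates.
            std::string blob(begin, size);
            std::cout << "embedded data: " << size << " bytes\n";
            return 0;
        }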

    Read the article

  • Django Encoding Issues with MySQL

    - by Jordan Reiter
    Okay, so I have a MySQL database set up. Most of the tables are latin1 and Django handles them fine. But, some of them are UTF-8 and Django does not handle them. Here's a sample table (these tables are all from django-geonames): DROP TABLE IF EXISTS `geoname`; SET @saved_cs_client = @@character_set_client; SET character_set_client = utf8; CREATE TABLE `geoname` ( `id` int(11) NOT NULL, `name` varchar(200) NOT NULL, `ascii_name` varchar(200) NOT NULL, `latitude` decimal(20,17) NOT NULL, `longitude` decimal(20,17) NOT NULL, `point` point default NULL, `fclass` varchar(1) NOT NULL, `fcode` varchar(7) NOT NULL, `country_id` varchar(2) NOT NULL, `cc2` varchar(60) NOT NULL, `admin1_id` int(11) default NULL, `admin2_id` int(11) default NULL, `admin3_id` int(11) default NULL, `admin4_id` int(11) default NULL, `population` int(11) NOT NULL, `elevation` int(11) NOT NULL, `gtopo30` int(11) NOT NULL, `timezone_id` int(11) default NULL, `moddate` date NOT NULL, PRIMARY KEY (`id`), KEY `country_id_refs_iso_alpha2_e2614807` (`country_id`), KEY `admin1_id_refs_id_a28cd057` (`admin1_id`), KEY `admin2_id_refs_id_4f9a0f7e` (`admin2_id`), KEY `admin3_id_refs_id_f8a5e181` (`admin3_id`), KEY `admin4_id_refs_id_9cc00ec8` (`admin4_id`), KEY `fcode_refs_code_977fe2ec` (`fcode`), KEY `timezone_id_refs_id_5b46c585` (`timezone_id`), KEY `geoname_52094d6e` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; SET character_set_client = @saved_cs_client; Now, if I try to get data from the table directly using MySQLdb and a cursor, I get the text with the proper encoding: >>> import MySQLdb >>> from django.conf import settings >>> >>> conn = MySQLdb.connect (host = "localhost", ... user = settings.DATABASES['default']['USER'], ... passwd = settings.DATABASES['default']['PASSWORD'], ... db = settings.DATABASES['default']['NAME']) >>> cursor = conn.cursor () >>> cursor.execute("select name from geoname where name like 'Uni%Hidalgo'"); 1L >>> g = cursor.fetchone() >>> g[0] 'Uni\xc3\xb3n Hidalgo' >>> print g[0] Unión Hidalgo However, if I try to use the Geoname model (which is actually a django.contrib.gis.db.models.Model), it fails: >>> from geonames.models import Geoname >>> g = Geoname.objects.get(name__istartswith='Uni',name__icontains='Hidalgo') >>> g.name u'Uni\xc3\xb3n Hidalgo' >>> print g.name Unión Hidalgo There's pretty clearly an encoding error here. In both cases the database is returning 'Uni\xc3\xb3n Hidalgo' but Django is (incorrectly?) translating the '\xc3\xb3n' to ó. What can I do to fix this?
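    Two common ways to deal with UTF-8 bytes that Django has decoded as latin-1, shown as a hedged sketch rather than the poster's code. The database names and the GIS backend path are placeholders; the 'charset' and 'use_unicode' options are passed straight through to MySQLdb.connect().

        # settings.py: force the connection itself to UTF-8 (placeholder names)
        DATABASES = {
            'default': {
                'ENGINE': 'django.contrib.gis.db.backends.mysql',
                'NAME': 'mydb',
                'USER': 'myuser',
                'PASSWORD': 'secret',
                'HOST': 'localhost',
                'OPTIONS': {'charset': 'utf8', 'use_unicode': True},
            }
        }

        # Repair an already mis-decoded value, useful for confirming that
        # double-encoding is what is actually happening:
        def fix_latin1_utf8(value):
            """Re-interpret a latin-1-decoded unicode string as UTF-8 bytes."""
            return value.encode('latin-1').decode('utf-8')

        # fix_latin1_utf8(u'Uni\xc3\xb3n Hidalgo') == u'Uni\xf3n Hidalgo'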

    Read the article

  • ELMAH - Using custom error pages to collect user feedback

    - by vdh_ant
    Hey guys I'm looking at using ELMAH for the first time but have a requirement that needs to be met that I'm not sure how to go about achieving... Basically, I am going to configure ELMAH to work under asp.net MVC and get it to log errors to the database when they occur. On top of this I be using customErrors to direct the user to a friendly message page when an error occurs. Fairly standard stuff... The requirement is that on this custom error page I have a form which enables to user to provide extra information if they wish. Now the problem arises due to the fact that at this point the error is already logged and I need to associate the loged error with the users feedback. Normally, if I was using my own custom implementation, after I log the error I would pass through the ID of the error to the custom error page so that an association can be made. But because of the way that ELMAH works, I don't think the same is quite possible. Hence I was wondering how people thought that one might go about doing this.... Cheers UPDATE: My solution to the problem is as follows: public class UserCurrentConextUsingWebContext : IUserCurrentConext { private const string _StoredExceptionName = "System.StoredException."; private const string _StoredExceptionIdName = "System.StoredExceptionId."; public virtual string UniqueAddress { get { return HttpContext.Current.Request.UserHostAddress; } } public Exception StoredException { get { return HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] as Exception; } set { HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] = value; } } public string StoredExceptionId { get { return HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] as string; } set { HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] = value; } } } Then when the error occurs, I have something like this in my Global.asax: public void ErrorLog_Logged(object sender, ErrorLoggedEventArgs args) { var item = new UserCurrentConextUsingWebContext(); item.StoredException = args.Entry.Error.Exception; item.StoredExceptionId = args.Entry.Id; } Then where ever you are later you can pull out the details by var item = new UserCurrentConextUsingWebContext(); var error = item.StoredException; var errorId = item.StoredExceptionId; item.StoredException = null; item.StoredExceptionId = null; Note this isn't 100% perfect as its possible for the same IP to have multiple requests to have errors at the same time. But the likely hood of that happening is remote. And this solution is independent of the session, which in our case is important, also some errors can cause sessions to be terminated, etc. Hence why this approach has worked nicely for us.

    Read the article

  • NHibernate FetchMode.Lazy

    - by RyanFetz
    I have an object which has a property on it that has then has collections which i would like to not load in a couple situations. 98% of the time i want those collections fetched but in the one instance i do not. Here is the code I have... Why does it not set the fetch mode on the properties collections? [DataContract(Name = "ThemingJob", Namespace = "")] [Serializable] public class ThemingJob : ServiceJob { [DataMember] public virtual Query Query { get; set; } [DataMember] public string Results { get; set; } } [DataContract(Name = "Query", Namespace = "")] [Serializable] public class Query : LookupEntity<Query>, DAC.US.Search.Models.IQueryEntity { [DataMember] public string QueryResult { get; set; } private IList<Asset> _Assets = new List<Asset>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Asset> Assets { get { return _Assets; } set { _Assets = value; } } private IList<Theme> _Themes = new List<Theme>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Theme> Themes { get { return _Themes; } set { _Themes = value; } } private IList<Affinity> _Affinity = new List<Affinity>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Affinity> Affinity { get { return _Affinity; } set { _Affinity = value; } } private IList<Word> _Words = new List<Word>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Word> Words { get { return _Words; } set { _Words = value; } } } using (global::NHibernate.ISession session = NHibernateApplication.GetCurrentSession()) { global::NHibernate.ICriteria criteria = session.CreateCriteria(typeof(ThemingJob)); global::NHibernate.ICriteria countCriteria = session.CreateCriteria(typeof(ThemingJob)); criteria.AddOrder(global::NHibernate.Criterion.Order.Desc("Id")); var qc = criteria.CreateCriteria("Query"); qc.SetFetchMode("Assets", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Themes", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Affinity", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Words", global::NHibernate.FetchMode.Lazy); pageIndex = Convert.ToInt32(pageIndex) - 1; // convert to 0 based paging index criteria.SetMaxResults(pageSize); criteria.SetFirstResult(pageIndex * pageSize); countCriteria.SetProjection(global::NHibernate.Criterion.Projections.RowCount()); int totalRecords = (int)countCriteria.List()[0]; return criteria.List<ThemingJob>().ToPagedList<ThemingJob>(pageIndex, pageSize, totalRecords); }
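    A hedged sketch of the inverse pattern that is usually recommended: map the collections lazy (NHibernate's default for bags and sets) and opt in to eager fetching for the 98% case, rather than trying to switch eager mappings off per query. The loadCollections flag is invented for illustration; entity and property names mirror the post.

        var criteria = session.CreateCriteria(typeof(ThemingJob))
            .AddOrder(global::NHibernate.Criterion.Order.Desc("Id"));

        var qc = criteria.CreateCriteria("Query");

        if (loadCollections)   // the common case
        {
            qc.SetFetchMode("Assets",   global::NHibernate.FetchMode.Join);
            qc.SetFetchMode("Themes",   global::NHibernate.FetchMode.Join);
            qc.SetFetchMode("Affinity", global::NHibernate.FetchMode.Join);
            qc.SetFetchMode("Words",    global::NHibernate.FetchMode.Join);
        }
        // otherwise the lazily mapped collections are simply never touched

    Note that joining several collections at once multiplies the row count, so Select fetch or batch-size settings may fit better than Join for the larger collections.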

    Read the article

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application. One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that after a while of running the memory allocated and freed by the heap was never returned and reported by the ullAvailVirtual number. After hours of intensive working, the ullAvailVirtual number no longer would accurately report the amount of memory available. However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing, escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that". So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space adding up all regions that weren't MEM_FREE and returned a value for GetMappedFileNameA(). My thinking was that the PageFileUsage was essentially 'private bytes' so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sorta) work, at least it doesn't cause crashes like the heap walker method. However, when both methods are enabled, the values are not the same. So one of the methods is wrong. So, StackOverflow world...how would you implement this? which method is more promising, or do you have a third, better method? should I go back to the original method, and further debug the odd errors? should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that it is getting 'dangerous', and they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color based system (red, yellow, green, which I could base on more factors than just a single number)
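    For comparison, a rough sketch (not the poster's implementation) that avoids heap walking entirely: report private bytes from GetProcessMemoryInfo against the process's total virtual address space from GlobalMemoryStatusEx. The ratio is a simplification, but it is cheap enough to poll every few seconds.

        #include <windows.h>
        #include <psapi.h>
        #include <cstdio>
        #pragma comment(lib, "psapi.lib")

        int main()
        {
            PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof(pmc) };
            GetProcessMemoryInfo(GetCurrentProcess(),
                                 reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                                 sizeof(pmc));

            MEMORYSTATUSEX ms = { sizeof(ms) };
            GlobalMemoryStatusEx(&ms);

            // "Used" here is private bytes; the denominator is the address
            // space this process could theoretically allocate.
            double pct = 100.0 * pmc.PrivateUsage / (double)ms.ullTotalVirtual;
            printf("private bytes: %Iu, ~%.1f%% of address space\n",
                   pmc.PrivateUsage, pct);
            return 0;
        }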

    Read the article

  • Delaying emails in PHP to avoid exceeding server limit

    - by Andrew P.
    Okay, so here's my problem: I have a list of members on a website, and periodically one of the admins my site (who are not very web or tech savvy) will send a newsletter to the memberlist. My current memberlist is well over 800 individuals long. So, I wrote an email script that sends the email to the full memberlist, with the members listed in the Bcc header. However, I've discovered that my host server has a limit of 300 emails per hour, which I apparently exceed even though the members are listed in the Bcc field. (I wasn't previously aware that the behaviour of Bcc was to send separate emails for each name on the list...) After some thought, I've come to the conclusion that my only solution is to have my script send only the email to only the first 300 emails, wait an hour, and send a second email to the next three hundred, wait another hour, and so on until I've sent the email to the whole member list. Looking around on the internet, I've seen some other solutions people have come up with for delaying emails in PHP. Sleep() is obviously not an option, because I can't just leave the script open and running for 3 or four hours. I've seen some people suggest cron jobs, but I'm not sure how feasible it would be to create three new cron jobs every time I send an email, use them once, and then delete them afterward. The final (and what I think is the smartest) solution I've seen, is to have a table in my database to temporarily store the emails to be delayed and sent later, and then create a cron job that checks this sql table every hour or so, compares the timestamp of the row to the current timestamp, and then sends the email if an hour has passed. So I'm asking you all which method you would recommend. Is there an easier solution that I've completely looked over (aside from getting a different hosting plan. ha!), or is there a cleaner way to do it than the database / cron job approach? tl;dr: I have 800 emails to send in an hour on a server that limits me to 300/hr. Using PHP, find a way to get around this problem in a way that the person sending the email needs only to click "send."
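    A hedged sketch of the queue-table approach described above; the table and column names are made up. The web page only inserts rows into email_queue, and a cron job run hourly sends at most 300 of them and marks them as sent.

        <?php
        // run from cron, e.g.: 0 * * * * php /path/to/send_queue.php
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');

        $result = $db->query(
            "SELECT id, recipient, subject, body
               FROM email_queue
              WHERE sent_at IS NULL
              ORDER BY id
              LIMIT 300");

        while ($row = $result->fetch_assoc()) {
            if (mail($row['recipient'], $row['subject'], $row['body'])) {
                $stmt = $db->prepare(
                    "UPDATE email_queue SET sent_at = NOW() WHERE id = ?");
                $stmt->bind_param('i', $row['id']);
                $stmt->execute();
            }
        }
        ?>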

    Read the article

  • The remote server returned an error: (400) Bad Request

    - by pravakar
    Hi, I am getting the following errors: "The remote server returned an error: (400) Bad Request" "Requested time out" sometimes when connecting to a host using a web service. If the XML returned is 5 kb then it is working fine, but if the size is 450kb or above it is displaying the error. Below is my code as well as the config file that resides at the client system. We don't have access to the settings of the server. Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Dim fileName As String = Server.MapPath("capitaljobs2.xml") Dim client = New CapitalJobsService.DataServiceClient("WSHttpBinding_IDataService", "http://xyz/webservice.svc") Dim userAccount = New UserAccount() 'replace here Dim jobAdList = client.GetProviderJobs(userAccount) '## Needed only to create XML files - do not ucomment - will overwrite files 'if (jobAdList != null) ' SerialiseJobAds(fileName, jobAdList); '## Read new ads from Xml file Dim capitalJobsList = DeserialiseJobdAds(fileName) UpdateProviderJobsFromXml(client, userAccount, capitalJobsList) client.Close() End Sub Private Shared Function DeserialiseJobdAds(ByVal fileName As String) As CapitalJobsService.CapitalJobsList Dim capitalJobsList As CapitalJobsService.CapitalJobsList ' Deserialize the data and read it from the instance If File.Exists(fileName) Then Dim fs = New FileStream(fileName, FileMode.Open) Dim reader = XmlDictionaryReader.CreateTextReader(fs, New XmlDictionaryReaderQuotas()) Dim ser2 = New DataContractSerializer(GetType(CapitalJobsList)) capitalJobsList = DirectCast(ser2.ReadObject(reader, True), CapitalJobsList) reader.Close() fs.Close() Return capitalJobsList End If Return Nothing End Function And the config file <system.web> <httpRuntime maxRequestLength="524288" /> </system.web> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IDataService" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2000000" maxStringContentLength="2000000" maxArrayLength="2000000" maxBytesPerRead="2000000" maxNameTableCharCount="2000000" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/> <message clientCredentialType="Windows" negotiateServiceCredential="true" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://xyz/DataService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IDataService" contract="CapitalJobsService.IDataService" name="WSHttpBinding_IDataService"> <identity> <dns value="localhost"/> </identity> </endpoint> </client> </system.serviceModel> I am using "Fiddler" to track the activities it is reading and terminating file like * FIDDLER: RawDisplay truncated at 16384 characters. Right-click to disable truncation. * But in config the number 16348 is not mentioned anywhere. Can you figure out if the error is on client or server side? The settings above are on the client side. Thanks in advance.
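    Two hedged notes. First, the "RawDisplay truncated at 16384 characters" message is Fiddler's own display limit (right-click to disable it, as the message says); it does not affect what is actually sent. Second, a 400 on large payloads usually means the matching quotas were never raised on the server-side binding, which the client config cannot override; a sketch of what the service's web.config would need is below, with names mirroring the client binding purely for illustration.

        <bindings>
          <wsHttpBinding>
            <binding name="WSHttpBinding_IDataService"
                     maxReceivedMessageSize="2147483647"
                     maxBufferPoolSize="2147483647">
              <readerQuotas maxStringContentLength="2000000"
                            maxArrayLength="2000000" />
            </binding>
          </wsHttpBinding>
        </bindings>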

    Read the article

  • Call asp.net web service from php

    - by SamB09
    Hi im calling an aps.net web service from a php service. The service searches both databases with a search parameter. Im not sure how to pass a search parameter to the asp.net service. Code is below. ( there is no search paramter currently but im just interested in how it would be passed to the asp.net service) $link = mysql_connect($host, $user, $passwd); mysql_select_db($dbName); $query = 'SELECT firstname, surname, phone, location FROM staff ORDER BY surname'; $result = mysql_query($query,$link); // if there is a result if ( mysql_num_rows($result) > 0 ) { // set up a DOM object $xmlDom1 = new DOMDocument(); $xmlDom1->appendChild($xmlDom1->createElement('directory')); $xmlRoot = $xmlDom1->documentElement; // loop over the rows in the result while ( $row = mysql_fetch_row($result) ) { $xmlPerson = $xmlDom1->createElement('staff'); $xmlFname = $xmlDom1->createElement('fname'); $xmlText = $xmlDom1->createTextNode($row[0]); $xmlFname->appendChild($xmlText); $xmlPerson->appendChild($xmlFname); $xmlSname = $xmlDom1->createElement('sname'); $xmlText = $xmlDom1->createTextNode($row[1]); $xmlSname->appendChild($xmlText); $xmlPerson->appendChild($xmlSname); $xmlTel = $xmlDom1->createElement('phone'); $xmlText = $xmlDom1->createTextNode($row[2]); $xmlTel->appendChild($xmlText); $xmlPerson->appendChild($xmlTel); $xmlLoc = $xmlDom1->createElement('loc'); $xmlText = $xmlDom1->createTextNode($row[3]); $xmlLoc->appendChild($xmlText); $xmlPerson->appendChild($xmlLoc); $xmlRoot->appendChild($xmlPerson); } } // // instance a SOAP client to the dotnet web service and read it into a DOM object // (this really should have an exception handler) // $client = new SoapClient('http://stuiis.cms.gre.ac.uk/mk05/dotnet/dataBind01/phoneBook.asmx?WSDL'); $xmlString = $client->getDirectoryDom()->getDirectoryDomResult->any; $xmlDom2 = new DOMDocument(); $xmlDom2->loadXML($xmlString); // merge the second DOM object into the first foreach ( $xmlDom2->documentElement->childNodes as $staffNode ) { $xmlPerson = $xmlDom1->createElement($staffNode->nodeName); foreach ( $staffNode->childNodes as $xmlNode ) { $xmlElement = $xmlDom1->createElement($xmlNode->nodeName); $xmlText = $xmlDom1->createTextNode($xmlNode->nodeValue); $xmlElement->appendChild($xmlText); $xmlPerson->appendChild($xmlElement); } $xmlRoot->appendChild($xmlPerson); } // return result echo $xmlDom
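    A hedged sketch only: it assumes the .asmx service exposes (or is extended to expose) an operation that accepts a search string. The operation and parameter names below ("searchDirectory", "filter") are invented, so check the WSDL for the real ones; document/literal ASMX operations take a single array of named parameters from PHP's SoapClient.

        <?php
        $client = new SoapClient('http://stuiis.cms.gre.ac.uk/mk05/dotnet/dataBind01/phoneBook.asmx?WSDL');

        $term = 'Smith';

        $response  = $client->searchDirectory(array('filter' => $term));
        $xmlString = $response->searchDirectoryResult->any;

        $xmlDom2 = new DOMDocument();
        $xmlDom2->loadXML($xmlString);

        // The local MySQL half of the page can take the same term via LIKE:
        //   SELECT firstname, surname, phone, location
        //     FROM staff WHERE surname LIKE 'Smith%' ORDER BY surname
        ?>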

    Read the article

  • How can I limit the cache used by a copy operation so there is still memory available for other caches?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSuSE. Each one is 2TB. When I do this, the system runs slow. My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (eg. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fill all available memory (24 GB total) with stuff from the copying disks, which will be read only once, then written and never used again. So then any time I use these apps (or kde4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Due to the cache being gone and the fact that these bloated applications need lots of cache, this makes the system horribly slow. Since it is USB,the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppy for a while until it recaches everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard busses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case. # free total used free shared buffers cached Mem: 24731556 24531876 199680 0 8834056 12998916 -/+ buffers/cache: 2698904 22032652 Swap: 4194300 24764 4169536 I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things. Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting of max memory per process/user/file system allowed to be used as cache/buffers?
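    One kernel-level option, sketched here under the assumption that the copy can be done by your own program: tell the kernel the copied pages will not be reused (POSIX_FADV_DONTNEED), so they are dropped instead of evicting the caches you care about. dd's oflag=nocache and "sync; echo 1 > /proc/sys/vm/drop_caches" are blunter alternatives.

        /* hedged sketch: copy src to dst, periodically dropping page cache */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            if (argc != 3) { fprintf(stderr, "usage: %s src dst\n", argv[0]); return 1; }

            int in  = open(argv[1], O_RDONLY);
            int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (in < 0 || out < 0) { perror("open"); return 1; }

            char buf[1 << 20];
            ssize_t n;
            off_t done = 0;
            while ((n = read(in, buf, sizeof buf)) > 0) {
                if (write(out, buf, n) != n) { perror("write"); return 1; }
                done += n;
                /* every 64 MB, flush and drop what was just read/written */
                if (done % (64 << 20) == 0) {
                    fdatasync(out);
                    posix_fadvise(in,  0, done, POSIX_FADV_DONTNEED);
                    posix_fadvise(out, 0, done, POSIX_FADV_DONTNEED);
                }
            }
            close(in);
            close(out);
            return 0;
        }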

    Read the article

  • How to build my own Application Settings

    - by adisembiring
    I want to build a ApplicationSetting for my application. The ApplicationSetting can be stored in a properties file or in a database table. The settings are stored in key-value pairs. E.g. ftp.host = blade ftp.username = dummy ftp.pass = pass content.row_pagination = 20 content.title = How to train your dragon. I have designed it as follows: Application settings reader: interface IApplicationSettingReader { read(); } DatabaseApplicationSettingReader { dao appSettingDao; AppSettings read() { List<AppSettingEntity> listEntity = appSettingsDao.findAll(); Map<String, String> map = new HaspMap<String, String>(); foreach (AppSettingEntity entity : listEntity) { map.put(entity.getConfigName(), entity.getConfigValue()); } return new AppSettings(map); } } DatabaseApplicationSettingReader { dao appSettingDao; AppSettings read() { //read from some properties file return new AppSettings(map); } } Application settings class: AppSettings { private static AppSettings instance; private Map map; Public AppSettings(Map map) { this.map = map; } public static AppSettings getInstance() { if (instance == null) { throw new RuntimeException("Object not configure yet"); } return instance; } public static configure(IApplicationSettingReader reader) { instance = reader.read(); } public String getFtpSetting(String param) { return map.get("ftp." + param); } public String getContentSetting(String param) { return map.get("content." + param); } } Test class: AppSettingsTest { IApplicationSettingReader reader; @Before public void setUp() throws Exception { reader = new DatabaseApplicationSettingReader(); } @Test public void getContentSetting_should_get_content_title() { AppSettings.configure(reader); Instance settings = AppSettings.getInstance(); String title = settings.getContentSetting("title"); assertNotNull(title); Sysout(title); } } My questions are: Can you give your opinion about my code, is there something wrong? I configure my application setting once, while the application start, I configure the application setting with appropriate reader (DbReader or PropertiesReader), I make it singleton. The problem is, when some user edit the database or file directly to database or file, I can't get the changed values. Now, I want to implement something like ApplicationSettingChangeListener. So if the data changes, I will refresh my application settings. Do you have any suggestions how this can be implemented?
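    A hedged sketch of the ApplicationSettingChangeListener idea, building on the classes above: a background task re-reads the settings periodically and notifies listeners when something changed. It assumes read() returns an AppSettings and that AppSettings exposes an asMap() accessor for comparison; both are assumptions, not part of the original design.

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        interface ApplicationSettingChangeListener {
            void settingsChanged(AppSettings newSettings);
        }

        class AppSettingsWatcher {
            private final IApplicationSettingReader reader;
            private final List<ApplicationSettingChangeListener> listeners =
                    new CopyOnWriteArrayList<ApplicationSettingChangeListener>();
            private Map<String, String> lastSnapshot;

            AppSettingsWatcher(IApplicationSettingReader reader) { this.reader = reader; }

            void addListener(ApplicationSettingChangeListener l) { listeners.add(l); }

            void start() {
                ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
                exec.scheduleAtFixedRate(new Runnable() {
                    public void run() { poll(); }
                }, 1, 1, TimeUnit.MINUTES);
            }

            private synchronized void poll() {
                AppSettings fresh = reader.read();            // assumes read() returns AppSettings
                Map<String, String> snapshot = fresh.asMap(); // asMap() is an assumed accessor
                if (!snapshot.equals(lastSnapshot)) {
                    lastSnapshot = snapshot;
                    AppSettings.configure(reader);            // refresh the singleton
                    for (ApplicationSettingChangeListener l : listeners) {
                        l.settingsChanged(fresh);
                    }
                }
            }
        }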

    Read the article

  • Mocking a concrete class : templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to testing a concrete object with this sort of structure. class Database { public: Database(Server server) : server_(server) {} int Query(const char* expression) { server_.Connect(); return server_.ExecuteQuery(); } private: Server server_; }; i.e. it has no virtual functions, let alone a well-defined interface. I want to a fake database which calls mock services for testing. Even worse, I want the same code to be either built against the real version or the fake so that the same testing code can both: Test the real Database implementation - for integration tests Test the fake implementation, which calls mock services To solve this, I'm using a templated fake, like this: #ifndef INTEGRATION_TESTS class FakeDatabase { public: FakeDatabase() : realDb_(mockServer_) {} int Query(const char* expression) { MOCK_EXPECT_CALL(mockServer_, Query, 3); return realDb_.Query(); } private: // in non-INTEGRATION_TESTS builds, Server is a mock Server with // extra testing methods that allows mocking Server mockServer_; Database realDb_; }; #endif template <class T> class TestDatabaseContainer { public: int Query(const char* expression) { int result = database_.Query(expression); std::cout << "LOG: " << result << endl; return result; } private: T database_; }; Edit: Note the fake Database must call the real Database (but with a mock Server). Now to switch between them I'm planning the following test framework: class DatabaseTests { public: #ifdef INTEGRATION_TESTS typedef TestDatabaseContainer<Database> TestDatabase ; #else typedef TestDatabaseContainer<FakeDatabase> TestDatabase ; #endif TestDatabase& GetDb() { return _testDatabase; } private: TestDatabase _testDatabase; }; class QueryTestCase : public DatabaseTests { public: void TestStep1() { ASSERT(GetDb().Query(static_cast<const char *>("")) == 3); return; } }; I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: Whether there's a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.
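    One way to avoid the #ifdef is sketched below: give the templated container a small interface, compile both instantiations into the test binary, and pick one at runtime (here via an environment variable, which is an invented choice). It mirrors the container from the post and assumes both Database and FakeDatabase are available and default-constructible in the test build.

        #include <cstdlib>
        #include <iostream>
        #include <memory>

        struct IQueryable {
            virtual ~IQueryable() {}
            virtual int Query(const char* expression) = 0;
        };

        template <class T>
        class TestDatabaseContainer : public IQueryable {
        public:
            virtual int Query(const char* expression) {
                int result = database_.Query(expression);
                std::cout << "LOG: " << result << std::endl;
                return result;
            }
        private:
            T database_;
        };

        inline std::auto_ptr<IQueryable> MakeTestDatabase(bool integration) {
            if (integration)
                return std::auto_ptr<IQueryable>(new TestDatabaseContainer<Database>());
            return std::auto_ptr<IQueryable>(new TestDatabaseContainer<FakeDatabase>());
        }

        // In the fixture:
        //   std::auto_ptr<IQueryable> db =
        //       MakeTestDatabase(std::getenv("INTEGRATION_TESTS") != 0);
        //   ASSERT(db->Query("") == 3);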

    Read the article

  • MVVM Binding Orthogonal Aspects in Views e.g. Application Settings

    - by chibacity
    I have an application which I am developing using WPF\Prism\MVVM. All is going well and I have some pleasing MVVM implementations. However, in some of my views I would like to be able to bind application settings e.g. when a user reloads an application, the checkbox for auto-scrolling a grid should be checked in the state it was last time the user used the application. My view needs to bind to something that holds the "auto-scroll" setting state. I could put this on the view-model, but applications settings are orthogonal to the purpose of the view-model. The "auto-scroll" setting is controlling an aspect of the view. This setting is just an example. There will be quite a number of them and splattering my view-models with properties to represent application settings (so I can bind them) feels decidedly yucky. One view-model per view seems to be de rigeuer... What is best\usual practice here? Splatter my view-models with application settings? Have multiple view-models per view so settings can be represented in their own right? Split views so that controls can bind to an ApplicationSettingsViewModel? = too many views? Something else? Edit 1 To add a little more context, I am developing a UI with a tabbed interface. Each tab will host a single widget and there a variety of widgets. Each widget is a Prism composition of individual views. Some views are common amongst widgets e.g. a file picker view. Whilst each widget is composed of several views, as a whole, conceptually a widget has a single set of user settings e.g. last file selected, auto-scroll enabled, etc. These need to be persisted and retrieved\applied when the application starts again, and the widget views are created. My question is focused on the fact that conceptually a widget has a single set of user settings which is at right-angles to the fact that a widget consists of many views. Each view in the widget has it's own view-model (which works nicely and logically) but if I stick to a one view-model per view, I would have to splatter each view-model with user settings appropriate to it. This doesn't sound right ?!?
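    A hedged sketch of the "settings in their own right" option: one settings view-model per widget, injected into each of that widget's view-models, so views bind to it without every view-model re-declaring the properties. All class and property names here are invented, and persistence (loading/saving the values) is left out.

        using System.ComponentModel;

        public class WidgetSettingsViewModel : INotifyPropertyChanged
        {
            private bool _autoScroll;
            public bool AutoScroll
            {
                get { return _autoScroll; }
                set { _autoScroll = value; Raise("AutoScroll"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void Raise(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public class GridViewModel
        {
            public GridViewModel(WidgetSettingsViewModel settings) { Settings = settings; }

            // Views bind with e.g. IsChecked="{Binding Settings.AutoScroll}"
            public WidgetSettingsViewModel Settings { get; private set; }
        }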

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on another system?

    - by unknownthreat
    I know that many people, at a first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate my question first. Normally, when we want our program to run at a native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source codes. If you want to run a program of another system in your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have problems of performance and glitches. We also have a newer compiler called "JIT Compiler", where the compiler will parse the bytecode program to native machine language before execution. The performance may increase to a very good extent with JIT Compiler, but the performance is still not the same as running it on a native system. Another program on Linux, WINE, is also a good tool for running Windows program on Linux system. I have tried running Team Fortress 2 on it, and tried experiment with some settings. I got ~40 fps on Windows at its mid-high setting on 1280 x 1024. On Linux, I need to turn everything low at 1280 x 1024 to get ~40 fps. There are 2 notable things though: Polygon model settings do not seem to affect framerate whether I set it low or high. When there are post-processing effects or some special effects that require manipulation of drawn pixels of the current frame, the framerate will drop to 10-20 fps. From this point, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that requires graphic card to the job, it slows down to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run STEAM and Team Fortress 2. Although there are flaws, they can run at lower setting. Or perhaps, I should also ask, "is it possible to translate one whole program on a system to another system without recompiling from source and get native speed?" I see that we also have AOT Compiler, is it possible to use it for something like this? Or there are so many constraints (such as DirectX call or differences in software architecture) that make it impossible to have a flawless and not native to the system program that runs at native speed?

    Read the article

  • How to view ASMX SOAP using Fiddler2?

    - by outer join
    Does anyone know if Fiddler can display the raw SOAP messages for ASMX web services? I'm testing a simple web service using both Fiddler2 and Storm and the results vary (Fiddler shows plain xml while Storm shows the SOAP messages). See sample request/responses below: Fiddler2 Request: POST /webservice1.asmx/Test HTTP/1.1 Accept: */* Referer: http://localhost.:4164/webservice1.asmx?op=Test Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; MS-RTC LM 8) Content-Type: application/x-www-form-urlencoded Accept-Encoding: gzip, deflate Host: localhost.:4164 Content-Length: 0 Connection: Keep-Alive Pragma: no-cache Fiddler2 Response: HTTP/1.1 200 OK Server: ASP.NET Development Server/9.0.0.0 Date: Thu, 21 Jan 2010 14:21:50 GMT X-AspNet-Version: 2.0.50727 Cache-Control: private, max-age=0 Content-Type: text/xml; charset=utf-8 Content-Length: 96 Connection: Close <?xml version="1.0" encoding="utf-8"?> <string xmlns="http://tempuri.org/">Hello World</string> Storm Request (body only): <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <Test xmlns="http://tempuri.org/" /> </soap:Body> </soap:Envelope> Storm Response: Status Code: 200 Content Length : 339 Content Type: text/xml; charset=utf-8 Server: ASP.NET Development Server/9.0.0.0 Status Description: OK <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <TestResponse xmlns="http://tempuri.org/"> <TestResult>Hello World</TestResult> </TestResponse> </soap:Body> </soap:Envelope> Thanks for any help.

    Read the article

  • Nasty redirect loop in WordPress (trailing slash, no trailing slash, and so on)

    - by Brett W. Thompson
    Hi, I read a ton of pages and tried lots of solutions but none have worked yet! My problem is that: test.asifa.net/asifa-wp Redirects to: test.asifa.net/asifa-wp/ Which redirects to the first page. What's a little bizarre is asifa-wp produces: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="http://test.asifa.net/asifa-wp/">here</a>.</p> </body></html> Whereas asifa-wp/ produces an empty page but the following headers (curl -v output): * About to connect() to test.asifa.net port 80 (#0) * Trying 69.163.203.138... connected * Connected to test.asifa.net (69.163.203.138) port 80 (#0) > GET /asifa-wp/ HTTP/1.1 > User-Agent: curl/7.18.2 (i386-redhat-linux-gnu) libcurl/7.18.2 NSS/3.12.0.3 zlib/1.2.3 libidn/0.6.14 libssh2/0.18 > Host: test.asifa.net > Accept: */* > < HTTP/1.1 301 Moved Permanently < Date: Sun, 13 Jun 2010 05:40:12 GMT < Server: Apache < X-Powered-By: PHP/5.2.13 < X-Pingback: http://test.asifa.net/asifa-wp/xmlrpc.php < Set-Cookie: _icl_current_language=en; expires=Mon, 14-Jun-2010 05:40:12 GMT; path=/asifa-wp/ < Location: http://test.asifa.net/asifa-wp < Vary: Accept-Encoding < Content-Length: 0 < Content-Type: text/html; charset=UTF-8 .htaccess looks like: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /asifa-wp/ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /asifa-wp/index.php [L] </IfModule> # END WordPress Any help at all would be tremendously appreciated!!!
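    Two hedged debugging aids (not a fix) that can be dropped temporarily into the active theme's functions.php or a small plugin: rule WordPress's canonical redirect in or out of the loop, and confirm that the stored siteurl/home options agree with where the install actually lives, since a mismatch produces exactly this ping-pong between /asifa-wp and /asifa-wp/.

        <?php
        // 1) Temporarily disable WordPress's canonical (trailing-slash) redirect.
        remove_filter('template_redirect', 'redirect_canonical');

        // 2) Log what WordPress thinks its own URLs are.
        error_log('siteurl = ' . get_option('siteurl'));
        error_log('home    = ' . get_option('home'));
        ?>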

    Read the article

  • How to use Wordpress' http.php in external projects?

    - by NJTechGuy
    I am trying to parse data from a pipe-delimited text file hosted on another server which in turn will be inserted in a database. My host (1and1) disabled allow_url_fopen in php.ini I guess. Error message : Warning: fopen() [function.fopen]: URL file-access is disabled in the server configuration in Code : <? // make sure curl is installed if (function_exists('curl_init')) { // initialize a new curl resource $ch = curl_init(); // set the url to fetch curl_setopt($ch, CURLOPT_URL, 'http://abc.com/data/output.txt'); // don't give me the headers just the content curl_setopt($ch, CURLOPT_HEADER, 0); // return the value instead of printing the response to browser curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // use a user agent to mimic a browser curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0'); $content = curl_exec($ch); // remember to always close the session and free all resources curl_close($ch); } else { // curl library is not installed so we better use something else } //$contents = fread ($fd,filesize ($filename)); //fclose ($fd); $delimiter = "|"; $splitcontents = explode($delimiter, $contents); $counter = ""; ?> <font color="blue" face="arial" size="4">Complete File Contents</font> <hr> <? echo $contents; ?> <br><br> <font color="blue" face="arial" size="4">Split File Contents</font> <hr> <? foreach ( $splitcontents as $color ) { $counter = $counter+1; echo "<b>Split $counter: </b> $colorn<br>"; } ?> Wordpress has this cool http.php file. Is there a better way of doing it? If not, how do I use http.php for this task? Thank you guys..
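    A hedged sketch using the WordPress HTTP API (wp_remote_get / wp_remote_retrieve_body) instead of raw cURL. Note that the cURL version above assigns $content but later explodes $contents; the version below also sidesteps that. Inside WordPress the API is already loaded; standalone you would need to pull in wp-load.php (assumed here).

        <?php
        // require_once 'wp-load.php';   // only needed outside WordPress

        $response = wp_remote_get('http://abc.com/data/output.txt');

        if (is_wp_error($response)) {
            die('Fetch failed: ' . $response->get_error_message());
        }

        $contents = wp_remote_retrieve_body($response);

        $splitcontents = explode('|', $contents);
        foreach ($splitcontents as $i => $field) {
            echo '<b>Split ' . ($i + 1) . ':</b> ' . htmlspecialchars($field) . "<br>";
        }
        ?>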

    Read the article

  • Using WSH (VBS) with iMacros - how do they do it?

    - by Carl
    (iMacros For Firefox 6.6.5.0; Firefox 3.6.3; Windows XP Pro SP3 w/all updates) I made an iMacro to select "load next 25" (comments) on a web page (CNN.COM). Unfortunately, iMacros doesn't appear to do looping (do the above until that string doesn't appear on the page anymore - i.e. all the comments are loaded). I tried putting {!iloop} in the TAG command, and it didn't work - then I read it wouldn't. So I tried the example at http://wiki.imacros.net/Loop_after_Query_or_Login I can't find any information on how to actually run the script in the above example. I searched Google and found VBS scripting is handled with .wsh files with Windows XP Pro. (The examples and other references there say Windows does VBS natively, so I looked up how with Google.) So I made the following .wsh file (modified the above example): Option Explicit Dim iim1, iret 'initialize iMacros instance set iim1 = CreateObject ("imacros") iret = iim1.iimInit() do while not iret < 0 iret = iim1.iimPlay("Load All CNN Comments") loop ' tell user we're done msgbox "End." ' exit iMacros instance and quit script iret = iim1.iimExit() Wscript.Quit() Here's the iMacro: (Load All CNN Comments.iim) VERSION BUILD=6650406 RECORDER=FX TAG POS=1 TYPE=A ATTR=TXT:Load<SP>next<SP>25 WAIT SECONDS=#DOWNLOADCOMPLETE# The iMacro works by itself - I press Play (left iMacro panel) and the next 25 comments load on the CNN.com page in the current tab. I put the .wsh file in the ...\iMacros\Macros directory - with the iMacro "Load All CNN Comments.iim" When I run the .wsh file (by just double clicking on it's icon - I created it with Notepad, and Windows gave it an icon for that file type - it's executable) I get the message from "Windows Script Host" - "There is no script file specified." I wasn't actually expecting it to work, as I don't see how Windows would know to call iMacros to run the iim macro. It would be nice if there was a simple, COMPLETE, example of how to use a VBS script with iMacros, that isn't bogged down with unnecessary complication like filling in a form, loading multiple pages, etc. I can't find ANY example. So what do I need to do to get this to work? I just installed iMacros yesterday, because I am constantly having the problem that there are hundred of comments after a CNN.com article, and loading 25 more at a time until they are all on the page makes it impractical to read any replies to my comments. It would also be nice if I could run the Macro from Firefox, rather than by double clicking on some file somewhere. Thanks for any help.

    Read the article

  • How to launch a Windows service network process to listen to a port on a localhost socket that is visible to logged-in users

    - by rwired
    Here's the code (in a standard TService in Delphi): const ProcessExe = 'MyNetApp.exe'; function RunService: Boolean; var StartInfo : TStartupInfo; ProcInfo : TProcessInformation; CreateOK : Boolean; begin CreateOK := false; FillChar(StartInfo,SizeOf(TStartupInfo),#0); FillChar(ProcInfo,SizeOf(TProcessInformation),#0); StartInfo.cb := SizeOf(TStartupInfo); CreateOK := CreateProcess(nil, PChar(ProcessEXE),nil,nil,False, CREATE_NEW_PROCESS_GROUP+NORMAL_PRIORITY_CLASS, nil, PChar(InstallDir), StartInfo, ProcInfo); CloseHandle(ProcInfo.hProcess); CloseHandle(ProcInfo.hThread); Result := CreateOK; end; procedure TServicel.ServiceExecute(Sender: TService); const IntervalsBetweenRuns = 4; //no of IntTimes between checks IntTime = 250; //ms var Count: SmallInt; begin Count := IntervalsBetweenRuns; //first time run immediately while not Terminated do begin Inc(Count); if Count >= IntervalsBetweenRuns then begin Count := 0; //We check to see if the process is running, //if not we run it. That's all there is to it. //if ProcessEXE crashes, this service host will just rerun it if processExists(ProcessEXE)=0 then RunService; end; Sleep(IntTime); ServiceThread.ProcessRequests(False); end; end; MyNetApp.exe is a SOCKS5 proxy listening on port 9870. Users configure their browser to this proxy which acts as a secure-tunnel/anonymizer. All works perfectly fine on 2000/XP/2003, but on Vista/Win7 with UAC the service runs in Session0 under LocalSystem and port 9870 doesn't show up in netstat for the logged-in user or Administrator. Seems UAC is getting in my way. Is there something I can do with the SECURITY_ATTRIBUTES or CreateProcess, or is there something I can do with CreateProcessAsUser or impersonation to ensure that a network socket on a service is available to logged-in users on the system (note, this app is for mass deployment, I don't have access to user credentials, and require the user elevate their privileges to install a service on Vista/Win7)

    Read the article

  • PHP sessions and class members.

    - by JDW
    Ok, messing about with classes in PHP and can't get it to work the way I'm used to as a C++/Java-guy. In the "_init" funtion, if I run a query at the "// query works here" line", everythong works, but in the "getUserID" function, all that happens is said warning... "getUserID" gets called from login.php (they are in the same dir): login.php <?php include_once 'sitehandler.php'; include_once 'dbhandler.php'; session_start(); #TODO: Safer input handling $t_userName = $_POST["name"]; $t_userId = $_SESSION['handler']['db']->getUserID($t_userName); if ($t_userId != -1) { $_SESSION['user']['name'] = $t_userName; $_SESSION['user']['id'] = $t_userId; } //error_log("user: " . $_SESSION['user']['name'] . ", id: ". $_SESSION['user']['id']); header("Location: " . $_SERVER["HTTP_REFERER"]); ? dbhandler.php <?php include_once 'handler.php'; class DBHandler extends HandlerAbstract { private $m_handle; function __construct() { parent::__construct(); } public function test() { #TODO: isdir liquibase #TODO: isfile liquibase-195/liquibase + .bat + execrights $this->m_isTested = true; } public function _init() { if (!$this->isTested()) $this->test(); if (!file_exists('files/data.db')) { #TODO: How to to if host is Windows based? exec('./files/liquibase-1.9.5/liquibase --driver=org.sqlite.JDBC --changeLogFile=files/data_db.xml --url=jdbc:sqlite:files/data.db update'); #TODO: quit if not success } #TODO: Set with default data try { $this->m_handle = new SQLite3('files/data.db'); } catch (Exception $e) { die("<hr />" . $e->getMessage() . "<hr />"); } // query works here $this->m_isSetup = true; } public function teardown() { } public function getUserID($name) { // PHP Warning: SQLite3::prepare(): The SQLite3 object has not been correctly initialised in $t_statement = $this->m_handle->prepare("SELECT id FROM users WHERE name = :name"); $t_statement->bindValue(":name", $name, SQLITE3_TEXT); $t_result = $t_statement->execute(); //var_dump($this->m_handle); return ($t_result)? (int)$t_result['id']: -1; } }
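    One likely cause, sketched here as an assumption rather than a certainty: an object kept in $_SESSION is serialized at the end of one request and unserialized on the next, and the live SQLite3 handle does not survive that round trip, which is exactly what the "not correctly initialised" warning describes. Re-opening the handle in __wakeup() (and dropping it in __sleep()) is one way to keep the stored handler usable.

        <?php
        class DBHandler extends HandlerAbstract
        {
            private $m_handle;

            public function __sleep()
            {
                $this->m_handle = null;   // the handle cannot be serialized
                // note: private fields of the parent class would need listing too
                return array_keys(get_object_vars($this));
            }

            public function __wakeup()
            {
                $this->m_handle = new SQLite3('files/data.db');
            }

            // ... _init(), getUserID(), etc. as in the post ...
        }
        ?>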

    Read the article

  • Binding functions of derived class with luabind

    - by Anamon
    I am currently developing a plugin-based system in C++ which provides a Lua scripting interface, for which I chose to use luabind. I'm using Lua 5 and luabind 0.9, both statically linked and compiled with MSVC++ 8. I am now having trouble binding functions with luabind when they are defined in a derived class, but not its parent class. More specifically, I have an abstract base class called 'IPlugin' from which all plugin classes inherit. When the plugin manager initialises, it registers that class and its functions like this: luabind::open(L); luabind::module(L) [ luabind::class_("IPlugin") .def("start", (void(IPlugin::*)())&IPlugin::start) ]; As it is only known at runtime what effective plugin classes are available, I had to solve loading plugins in a kind of roundabout way. The plugin manager exposes a factory function to Lua, which takes the name of a plugin class and a desired object name. The factory then creates the object, registers the plugin's class as inheriting from the 'IPlugin' base class, and immediately calls a function on the created object that registers itself as a global with the Lua state, like this: void PluginExample::registerLuaObject(lua_State *L, string a_name) { luabind::globals(L)[a_name] = (PluginExample*)this; } I initially did this because I had problems with Lua determining the most derived class of the object, as if I register it from the StreamManager it is only known as a subtype of 'IPlugin' and not the specific subtype. I'm not sure anymore if this is even necessary though, but it works and the created object is subsequently accessible from Lua under 'a_name'. The problem I have, though, is that functions defined in the derived class, which were not declared at all in the parent class, cannot be used. Virtual functions defined in the base class, such as 'start' above, work fine, and calling them from Lua on the new object runs the respective redefined code from the 'PluginExample' class. But if I add a new function to 'PluginExample', here for example a function taking no arguments and returning void, and register it like this: luabind::module(L) [ luabind::class_("PluginExample") .def(luabind::constructor()) .def("func", &PluginExample::func) ]; calling 'func' on the new object yields the following Lua runtime error: No matching overload found, candidates: void func(PluginExample&) I am correctly using the ':' syntax so the 'self' argument is not needed and it seems suddenly Lua cannot determine the derived type of the object anymore. I am sure I am doing something wrong, probably having to do with the two-step binding required by my system architecture, but I can't figure out where. I'd much appreciate some help =)
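    A hedged sketch of the usual luabind way to express this: list the base class when registering the derived one, so an object stored as a global (whether as PluginExample* or IPlugin*) can be resolved to its most derived registered type and expose func() as well as the IPlugin functions.

        luabind::module(L)
        [
            luabind::class_<PluginExample, IPlugin>("PluginExample")
                .def(luabind::constructor<>())
                .def("func", &PluginExample::func)
        ];

        // With the base listed, registering the object is enough; the cast in
        // registerLuaObject should no longer be needed to find either set of
        // methods:
        luabind::globals(L)[a_name] = this;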

    Read the article

  • Any tips on reducing wxWidgets application code size?

    - by Billy ONeal
    I have written a minimal wxWidgets application: stdafx.h #define wxNO_REGEX_LIB #define wxNO_XML_LIB #define wxNO_NET_LIB #define wxNO_EXPAT_LIB #define wxNO_JPEG_LIB #define wxNO_PNG_LIB #define wxNO_TIFF_LIB #define wxNO_ZLIB_LIB #define wxNO_ADV_LIB #define wxNO_HTML_LIB #define wxNO_GL_LIB #define wxNO_QA_LIB #define wxNO_XRC_LIB #define wxNO_AUI_LIB #define wxNO_PROPGRID_LIB #define wxNO_RIBBON_LIB #define wxNO_RICHTEXT_LIB #define wxNO_MEDIA_LIB #define wxNO_STC_LIB #include <wx/wxprec.h> Minimal.cpp #include "stdafx.h" #include <memory> #include <wx/wx.h> class Minimal : public wxApp { public: virtual bool OnInit(); }; IMPLEMENT_APP(Minimal) DECLARE_APP(Minimal) class MinimalFrame : public wxFrame { DECLARE_EVENT_TABLE() public: MinimalFrame(const wxString& title); void OnQuit(wxCommandEvent& e); void OnAbout(wxCommandEvent& e); }; BEGIN_EVENT_TABLE(MinimalFrame, wxFrame) EVT_MENU(wxID_ABOUT, MinimalFrame::OnAbout) EVT_MENU(wxID_EXIT, MinimalFrame::OnQuit) END_EVENT_TABLE() MinimalFrame::MinimalFrame(const wxString& title) : wxFrame(0, wxID_ANY, title) { std::auto_ptr<wxMenu> fileMenu(new wxMenu); fileMenu->Append(wxID_EXIT, L"E&xit\tAlt-X", L"Terminate the Minimal Example."); std::auto_ptr<wxMenu> helpMenu(new wxMenu); helpMenu->Append(wxID_ABOUT, L"&About\tF1", L"Show the about dialog box."); std::auto_ptr<wxMenuBar> bar(new wxMenuBar); bar->Append(fileMenu.get(), L"&File"); fileMenu.release(); bar->Append(helpMenu.get(), L"&Help"); helpMenu.release(); SetMenuBar(bar.get()); bar.release(); CreateStatusBar(2); SetStatusText(L"Welcome to wxWidgets!"); } void MinimalFrame::OnAbout(wxCommandEvent& e) { wxMessageBox(L"Some text about me!", L"About", wxOK, this); } void MinimalFrame::OnQuit(wxCommandEvent& e) { Close(); } bool Minimal::OnInit() { std::auto_ptr<MinimalFrame> mainFrame( new MinimalFrame(L"Minimal wxWidgets Application")); mainFrame->Show(); mainFrame.release(); return true; } This minimal program weighs in at 2.4MB! (Executable compression drops this to half a MB or so but that's still HUGE!) (I must statically link because this application needs to be single-binary-xcopy-deployed, so both the C runtime and wxWidgets itself are set for static linking) Any tips on cutting this down? (I'm using Microsoft Visual Studio 2010)

    Read the article

  • Java getInputStream SocketTimeoutException instead of NoRouteToHostException

    - by Jon
    I have an odd issue happening when trying to open multiple Input Streams (in separate threads) on Linux (RHEL). The behaviour works as expected on windows. I am kicking off 3 threads to open https connections to 3 different servers. All three are invalid IP addresses (in this test case), so I expect an NoRouteToHostException for each of them. The first two return these as expected, and quite quickly. (see stack trace below) However the third (and 4th when I tested it that way) do NOT give a no route exception. They wait for ages, and then give a SocketTimeoutException (see other stack trace below). This takes ages to come back, and does not accurately express the connection issue. The offending line of code is: reader = new BufferedReader(new InputStreamReader(conn.getInputStream())); Has anyone seen something like this before? Are there multi-threading issues with sockets on REHL or some limit somewhere to how many can connect at once...or...something? Expected stack trace, as received for first two: java.net.NoRouteToHostException: No route to host at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559) at sun.net.NetworkClient.doConnect(NetworkClient.java:158) at sun.net.www.http.HttpClient.openServer(HttpClient.java:394) at sun.net.www.http.HttpClient.openServer(HttpClient.java:529) at sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:272) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234) Unexpected stack trace, as received on 3rd: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559) at sun.net.NetworkClient.doConnect(NetworkClient.java:158) at sun.net.www.http.HttpClient.openServer(HttpClient.java:394) at sun.net.www.http.HttpClient.openServer(HttpClient.java:529) at sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:272) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)

    Read the article

  • How can a 1 GB Java heap on a 64-bit machine use 3 GB of VIRT space?

    - by Graeme Moss
    I run the same process on a 32bit machine as on a 64bit machine with the same memory VM settings (-Xms1024m -Xmx1024m) and similar VM version (1.6.0_05 vs 1.6.0_16). However the virtual space used by the 64bit machine (as shown in top under "VIRT") is almost three times as big as that in 32bit! I know 64bit VMs will use a little more memory for the larger references, but how can it be three times as big? Am I reading VIRT in top incorrectly? Full data shown below, showing top and then the result of jmap -heap, first for 64bit, then for 32bit. Note the VIRT for 64bit is 3319m for 32bit is 1220m. * 64bit * PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22534 agent 20 0 3319m 163m 14m S 4.7 2.0 0:04.28 java $ jmap -heap 22534 Attaching to process ID 22534, please wait... Debugger attached successfully. Server compiler detected. JVM version is 10.0-b19 using thread-local object allocation. Parallel GC with 4 thread(s) Heap Configuration: MinHeapFreeRatio = 40 MaxHeapFreeRatio = 70 MaxHeapSize = 1073741824 (1024.0MB) NewSize = 2686976 (2.5625MB) MaxNewSize = -65536 (-0.0625MB) OldSize = 5439488 (5.1875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 21757952 (20.75MB) MaxPermSize = 88080384 (84.0MB) Heap Usage: PS Young Generation Eden Space: capacity = 268500992 (256.0625MB) used = 247066968 (235.62142181396484MB) free = 21434024 (20.441078186035156MB) 92.01715277089181% used From Space: capacity = 44695552 (42.625MB) used = 0 (0.0MB) free = 44695552 (42.625MB) 0.0% used To Space: capacity = 44695552 (42.625MB) used = 0 (0.0MB) free = 44695552 (42.625MB) 0.0% used PS Old Generation capacity = 715849728 (682.6875MB) used = 0 (0.0MB) free = 715849728 (682.6875MB) 0.0% used PS Perm Generation capacity = 21757952 (20.75MB) used = 16153928 (15.405586242675781MB) free = 5604024 (5.344413757324219MB) 74.24378912132907% used * 32bit * PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 30168 agent 20 0 1220m 175m 12m S 0.0 2.2 0:13.43 java $ jmap -heap 30168 Attaching to process ID 30168, please wait... Debugger attached successfully. Server compiler detected. JVM version is 14.2-b01 using thread-local object allocation. Parallel GC with 8 thread(s) Heap Configuration: MinHeapFreeRatio = 40 MaxHeapFreeRatio = 70 MaxHeapSize = 1073741824 (1024.0MB) NewSize = 1048576 (1.0MB) MaxNewSize = 4294901760 (4095.9375MB) OldSize = 4194304 (4.0MB) NewRatio = 8 SurvivorRatio = 8 PermSize = 16777216 (16.0MB) MaxPermSize = 67108864 (64.0MB) Heap Usage: PS Young Generation Eden Space: capacity = 89522176 (85.375MB) used = 80626352 (76.89128112792969MB) free = 8895824 (8.483718872070312MB) 90.0629940005033% used From Space: capacity = 14876672 (14.1875MB) used = 14876216 (14.187065124511719MB) free = 456 (4.3487548828125E-4MB) 99.99693479832048% used To Space: capacity = 14876672 (14.1875MB) used = 0 (0.0MB) free = 14876672 (14.1875MB) 0.0% used PS Old Generation capacity = 954466304 (910.25MB) used = 10598496 (10.107513427734375MB) free = 943867808 (900.1424865722656MB) 1.1104107034039412% used PS Perm Generation capacity = 16777216 (16.0MB) used = 11366448 (10.839889526367188MB) free = 5410768 (5.1601104736328125MB) 67.74930953979492% used

    Read the article

  • compiling a program to run in DOS mode

    - by dygi
    I write a simple program, to run in DOS mode. Everything works under emulated console in Win XP / Vista / Seven, but not in DOS. The error says: this program caonnot be run in DOS mode. I wonder is that a problem with compiler flags or something bigger. For programming i use Code::Blocks v 8.02 with such settings for compilation: -Wall -W -pedantic -pedantic-errors in Project \ Build options \ Compiler settings I've tried a clean DOS mode, booting from cd, and also setting up DOS in Virtual Machine. The same error appears. Should i turn on some more compiler flags ? Some specific 386 / 486 optimizations ? UPDATE Ok, i've downloaded, installed and configured DJGPP. Even resolved some problems with libs and includes. Still have two questions. 1) i can't compile a code, which calls _strdate and _strtime, i've double checked the includes, as MSDN says it needs time.h, but still error says: _strdate was not declared in this scope, i even tried to add std::_strdate, but then i have 4, not 2 errors sazing the same 2) the 2nd code is about gotoxy, it looks like that: #include <windows.h> void gotoxy(int x, int y) { COORD position; position.X = x; position.Y = y; SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), position); } error says there is no windows.h, so i've put it in place, but then there are many more errors saying some is missing from windows.h, I SUPPOSE it won't work because this functions is strictly for windows right ? is there any way to write similar gotoxy for DOS ? UPDATE2 1) solved using time(); instead of _strdate(); and _strtime(); here's the code time_t rawtime; struct tm * timeinfo; char buffer [20]; time ( &rawtime ); timeinfo = localtime ( &rawtime ); strftime (buffer,80,"%Y.%m.%d %H:%M:%S\0",timeinfo); string myTime(buffer); It now compiles under DJGPP. UPDATE3 Still need to solve a code using gotoxy - replaced it with some other code that compiles (under DJGPP). Thank You all for help. Just learnt some new things about compiling (flags, old IDE's like DJGPP, OpenWatcom) and refreshed memories setting DOS to work :--)
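    For the gotoxy question specifically, a hedged sketch for the DJGPP/DOS side: DJGPP's <conio.h> already ships a Borland-style gotoxy(), so the Windows-only SetConsoleCursorPosition version can be replaced wholesale when targeting DOS.

        #include <conio.h>

        int main(void)
        {
            clrscr();            /* clear the text screen        */
            gotoxy(10, 5);       /* column 10, row 5 (1-based)   */
            cprintf("Hello from DOS");
            getch();
            return 0;
        }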

    Read the article

  • iPhone reachability checking

    - by Sneakyness
    I've found several examples of code to do what I want (check for reachability), but none of it seems to be exact enough to be of use to me. I can't figure out why this doesn't want to play nice. I have the reachability.h/m in my project, I'm doing #import <SystemConfiguration/SystemConfiguration.h> And I have the framework added. I also have: #import "Reachability.h" at the top of the .m in which I'm trying to use the reachability. Reachability* reachability = [Reachability sharedReachability]; [reachability setHostName:@"http://www.google.com"]; // set your host name here NetworkStatus remoteHostStatus = [reachability remoteHostStatus]; if(remoteHostStatus == NotReachable) {NSLog(@"no");} else if (remoteHostStatus == ReachableViaWiFiNetwork) {NSLog(@"wifi"); } else if (remoteHostStatus == ReachableViaCarrierDataNetwork) {NSLog(@"cell"); } This is giving me all sorts of problems. What am I doing wrong? I'm an alright coder, I just have a hard time when it comes time to figure out what needs to be put where to enable what I want to do, regardless if I want to know what I want to do or not. (So frustrating) Update: This is what's going on. This is in my viewcontroller, which I have the #import <SystemConfiguration/SystemConfiguration.h> and #import "Reachability.h" set up with. This is my least favorite part of programming by far. FWIW, we never ended up implementing this in our code. The two features that required internet access (entering the sweepstakes, and buying the dvd), were not main features. Nothing else required internet access. Instead of adding more code, we just set the background of both internet views to a notice telling the users they must be connected to the internet to use this feature. It was in theme with the rest of the application's interface, and was done well/tastefully. They said nothing about it during the approval process, however we did get a personal phone call to verify that we were giving away items that actually pertained to the movie. According to their usually vague agreement, you aren't allowed to have sweepstakes otherwise. I would also think this adheres more strictly to their "only use things if you absolutely need them" ideaology as well. Here's the iTunes link to the application, EvoScanner.

    Read the article
