Search Results

Search found 4652 results on 187 pages for 'explicit constructor'.

Page 97/187 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Break a class in twain, or impose an interface for restricted access?

    - by bedwyr
    What's the best way of partitioning a class when its functionality needs to be externally accessed in different ways by different classes? Hopefully the following example will make the question clear :) I have a Java class which accesses a single location in a directory, allowing external classes to perform read/write operations to it. Read operations return usage stats on the directory (e.g. available disk space, number of writes, etc.); write operations, obviously, allow external classes to write data to the disk. These methods always work on the same location, and receive their configuration (e.g. which directory to use, min disk space, etc.) from an external source (passed to the constructor). This class looks something like this: public class DiskHandler { public DiskHandler(String dir, int minSpace) { ... } public void writeToDisk(String contents, String filename) { int space = getAvailableSpace(); ... } public int getAvailableSpace() { ... } } There's quite a bit more going on, but this will suffice. This class needs to be accessed differently by two external classes. One class needs access to the read operations; the other needs access to both read and write operations. public class DiskWriter { DiskHandler diskHandler; public DiskWriter() { diskHandler = new DiskHandler(...); } public void doSomething() { diskHandler.writeToDisk(...); } } public class DiskReader { DiskHandler diskHandler; public DiskReader() { diskHandler = new DiskHandler(...); } public void doSomething() { int space = diskHandler.getAvailableSpace(...); } } At this point, both classes depend on the same class, but the class which should only read has access to the write methods. Solution 1 I could break this class into two. One class would handle read operations, and the other would handle writes: // NEW "UTILITY" CLASSES public class WriterUtil { private ReaderUtil diskReader; public WriterUtil(String dir, int minSpace) { ... diskReader = new ReaderUtil(dir, minSpace); } public void writeToDisk(String contents, String filename) { int space = diskReader.getAvailableSpace(); ... } } public class ReaderUtil { public ReaderUtil(String dir, int minSpace) { ... } public int getAvailableSpace() { ... } } // MODIFIED EXTERNALLY-ACCESSING CLASSES public class DiskWriter { WriterUtil diskWriter; public DiskWriter() { diskWriter = new WriterUtil(...); } public void doSomething() { diskWriter.writeToDisk(...); } } public class DiskReader { ReaderUtil diskReader; public DiskReader() { diskReader = new ReaderUtil(...); } public void doSomething() { int space = diskReader.getAvailableSpace(...); } } This solution prevents classes from having access to methods they should not, but it also breaks encapsulation. The original DiskHandler class was completely self-contained and only needed config parameters via a single constructor. By breaking the functionality apart into read/write classes, both are concerned with the directory and both need to be instantiated with their respective values. In essence, I don't really care to duplicate the concerns. Solution 2 I could implement an interface which only provisions read operations, and use this when a class only needs access to those methods. The interface might look something like this: public interface Readable { int getAvailableSpace(); } The Reader class would instantiate the object like this: Readable diskReader; public DiskReader() { diskReader = new DiskHandler(...); } This solution seems brittle, and prone to confusion in the future. It doesn't guarantee developers will use the correct interface in the future. Any changes to the implementation of DiskHandler could also require updating the interface as well as the accessing classes. I like it better than the previous solution, but not by much. Frankly, neither of these solutions seems perfect, but I'm not sure if one should be preferred over the other. I really don't want to break the original class up, but I also don't know if the interface buys me much in the long run. Are there other solutions I'm missing?
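    A hedged tightening of Solution 2, using only the names already in the question: if DiskHandler itself declares implements Readable, the compiler enforces the restriction (the assignment below doesn't even compile otherwise), and a change to a read operation's signature breaks the build at the interface instead of drifting silently:

      public interface Readable {
          int getAvailableSpace();
      }

      public class DiskHandler implements Readable {
          public DiskHandler(String dir, int minSpace) { /* ... */ }
          public void writeToDisk(String contents, String filename) { /* ... */ }
          @Override public int getAvailableSpace() { /* stub for the sketch */ return 0; }
      }

      // the reader is handed only the narrow view (values illustrative):
      Readable diskReader = new DiskHandler("/some/dir", 1024);

    DiskHandler stays self-contained with its single constructor; the interface merely narrows the view each client receives, which is the textbook interface-segregation answer to exactly this read/write split.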

    Read the article

  • Performance issues with jms and spring integration. What is wrong with the following configuration?

    - by user358448
    I have a JMS producer which generates many messages per second; they are sent to an AMQ persistent queue and are consumed by a single consumer, which needs to process them sequentially. But it seems that the producer is much faster than the consumer, and I am having performance and memory problems. Messages are fetched very, very slowly, and the consuming seems to happen in intervals (the consumer "asks" for messages in a polling fashion, which is strange?!) Basically everything happens with Spring Integration. Here is the configuration on the producer side. First, stake messages come in on stakesInMemoryChannel; from there they are filtered through the filteredStakesChannel, and from there they go into the JMS queue (using an executor so the sending happens in a separate thread) <bean id="stakesQueue" class="org.apache.activemq.command.ActiveMQQueue"> <constructor-arg name="name" value="${jms.stakes.queue.name}" /> </bean> <int:channel id="stakesInMemoryChannel" /> <int:channel id="filteredStakesChannel" > <int:dispatcher task-executor="taskExecutor"/> </int:channel> <bean id="stakeFilterService" class="cayetano.games.stake.StakeFilterService"/> <int:filter input-channel="stakesInMemoryChannel" output-channel="filteredStakesChannel" throw-exception-on-rejection="false" expression="true"/> <jms:outbound-channel-adapter channel="filteredStakesChannel" destination="stakesQueue" delivery-persistent="true" explicit-qos-enabled="true" /> <task:executor id="taskExecutor" pool-size="100" /> The other application consumes the messages like this: the messages come in on stakesInputChannel from the JMS stakesQueue, after which they are routed to 2 separate channels; one persists the message and the other does some other stuff, let's call it "processing". <bean id="stakesQueue" class="org.apache.activemq.command.ActiveMQQueue"> <constructor-arg name="name" value="${jms.stakes.queue.name}" /> </bean> <jms:message-driven-channel-adapter channel="stakesInputChannel" destination="stakesQueue" acknowledge="auto" concurrent-consumers="1" max-concurrent-consumers="1" /> <int:publish-subscribe-channel id="stakesInputChannel" /> <int:channel id="persistStakesChannel" /> <int:channel id="processStakesChannel" /> <int:recipient-list-router id="customRouter" input-channel="stakesInputChannel" timeout="3000" ignore-send-failures="true" apply-sequence="true" > <int:recipient channel="persistStakesChannel"/> <int:recipient channel="processStakesChannel"/> </int:recipient-list-router> <bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy"> <property name="queuePrefetch" value="${jms.broker.prefetch.policy}" /> </bean> <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"> <property name="targetConnectionFactory"> <bean class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL" value="${jms.broker.url}" /> <property name="prefetchPolicy" ref="prefetchPolicy" /> <property name="optimizeAcknowledge" value="true" /> <property name="useAsyncSend" value="true" /> </bean> </property> <property name="sessionCacheSize" value="10"/> <property name="cacheProducers" value="false"/> </bean>
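    A hedged guess at the polling symptom, not verified against this exact setup: CachingConnectionFactory is aimed at producers, and combining it (especially with a large broker prefetch) with a single message-driven consumer is a known way to get bursty, slow delivery. A minimal sketch of a consumer-side factory — bean names come from the question, the prefetch value is illustrative:

      <!-- consumer side: plain factory with a prefetch of 1, so the single slow
           consumer is fed message-by-message instead of in cached bursts -->
      <bean id="consumerConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
          <property name="brokerURL" value="${jms.broker.url}" />
          <property name="prefetchPolicy">
              <bean class="org.apache.activemq.ActiveMQPrefetchPolicy">
                  <property name="queuePrefetch" value="1" />
              </bean>
          </property>
      </bean>

      <jms:message-driven-channel-adapter channel="stakesInputChannel"
          destination="stakesQueue"
          connection-factory="consumerConnectionFactory"
          concurrent-consumers="1" max-concurrent-consumers="1" />

    Keeping the caching factory for the outbound adapter while the listener container gets the plain one is the usual split; whether it also cures the memory pressure here is an assumption to test.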

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit. I'm trying to connect remotely to a postgres instance for purposes of performing a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data: *:5432:*postgres:[mypassword] (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and I've also tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively). From my client machine, I connect using: pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile] I'm presuming that I need to provide the '-w' switch in order to ignore the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied." Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the postgres prompt, that would work as well. Thanks.
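    Two hedged observations based on the libpq docs rather than this exact machine: the quoted pgpass line appears to be missing a colon (the format is hostname:port:database:username:password, so *postgres runs the database and username fields together), and libpq reads the file of the Windows user running pg_dump on the client — the server's copy is never consulted for a client-side dump. A sketch, with illustrative paths and names:

      *:5432:*:postgres:mypassword

      pg_dump -h <serverip> -U postgres -w -f C:\dumps\mydb.sql mydbname

    Also note the per-user location usually includes Roaming (C:\Users\<user>\AppData\Roaming\postgresql\pgpass.conf) — worth checking where the file actually sits for the account running the batch.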

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment. My test environment: Server 2003 Domain Controller About 10 machines (between XP SP3, and server 2008) all joined to my domain. No real other setup, or active directory configuration has been done apart from things like getting DNS right. I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say. What I've done: I followed the documentation from Microsoft in How to Deploy Software via Group Policy, so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, I have verified the computers and users are members of groups that are assigned to receive the policy etc). If I deploy the software via the Computer Configuration, when I reboot the target machine I get the following: When the computer starts up it logs Event ID 108, and says "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird because if it ever actually tried to invoke the windows installer it should log any sort of failure of my application's installer. If I open a command prompt and manually run: msiexec /qb /i \\[host]\[share]\installer.msi It installs the service just fine. If I deploy the software via the User Configuration, when I log that user in the Event Log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User configuration even though it's not installed when I go to Control Panel - Add/Remove Programs and click on Add New Programs my service installer is being advertised and I can install/remove it from there. (this does not happen when it's assigned to computers) Hopefully that wall of text was enough information to get me going, thanks all for the help.

    Read the article

  • PHP - DOM class - numbered entities and encoding problem

    - by user343607
    Hi guys, I'm having some difficulty with the PHP DOM class. I am making a sitemap script, and I need the output of $doc->saveXML() to be like <?xml version="1.0" encoding="UTF-8"?> <root> <url> <loc>http://www.somesite.com/servi&#xE7;os/redesign</loc> </url> </root> or <?xml version="1.0" encoding="UTF-8"?> <root> <url> <loc>http://www.somesite.com/servi&#231;os/redesign</loc> </url> </root> but I am getting: <?xml version="1.0" encoding="UTF-8"?> <root> <url> <loc>http://www.somesite.com/servi&amp;#xE7;os/redesign</loc> </url> </root> This is the closest I could get, using a function that replaces named entities with numbered ones. I was also able to reproduce <?xml version="1.0" ?> <root> <url> <loc>http://www.somesite.com/servi&amp;#xE7;os/redesign</loc> </url> </root> but without the encoding specified. The best solution (the way I think the code should be written) would be: <?php $myArray = array(); // do some stuff to populate the array with URL strings $doc = new DOMDocument('1.0', 'UTF-8'); // here we modify some property. Maybe it is the answer I am looking for... $urlset = $doc->createElement("urlset"); $urlset = $doc->appendChild($urlset); foreach($myArray as $address) { $url = $doc->createElement("url"); $url = $urlset->appendChild($url); $loc = $doc->createElement("loc"); $loc = $url->appendChild($loc); $valueContent = $doc->createTextNode($address); $valueContent = $loc->appendChild($valueContent); } echo $doc->saveXML(); ?> Notes: the server response header declares the charset as UTF-8; the PHP script is saved in UTF-8; the URLs read are UTF-8 strings; the above script declares the encoding on the DOMDocument constructor and does not use any conversion functions like htmlentities, urlencode, utf8_encode... I've tried changing the DOMDocument properties DOMDocument::$resolveExternals and DOMDocument::$substituteEntities. No combination worked. And yes, I know I could do the whole process without specifying the character set on the DOMDocument constructor, dump the string content into a variable and do a very simple substitution with string replace functions. That works. But I would like to know where I am slipping, how this can be done using native APIs and settings, or even whether it is possible. Thanks in advance.
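    A hedged explanation and sketch, based on general libxml behaviour rather than this exact script: createTextNode() treats its argument as literal text, so a pre-encoded &#xE7; has its & escaped to &amp; — that part is working as designed. The usual way out is to put the raw UTF-8 character in the text node; and if the output really must carry numeric references, saving with a target encoding that cannot represent the character (e.g. ASCII) should make libxml emit the references itself:

      <?php
      // assumption: libxml escapes characters unrepresentable in the target
      // encoding as numeric character references on save
      $doc = new DOMDocument('1.0', 'ASCII');
      $urlset = $doc->appendChild($doc->createElement('urlset'));
      $url    = $urlset->appendChild($doc->createElement('url'));
      $loc    = $url->appendChild($doc->createElement('loc'));
      $loc->appendChild($doc->createTextNode('http://www.somesite.com/serviços/redesign'));
      echo $doc->saveXML();
      // expected: <loc>http://www.somesite.com/servi&#231;os/redesign</loc>
      ?>

    The trade-off is that the XML declaration will then say encoding="ASCII" rather than UTF-8 (equivalent documents to an XML parser, but if the declaration must literally read UTF-8, the post-save string replacement remains the pragmatic route).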

    Read the article

  • Connect through remote computer connection

    - by Didac
    First, sorry for my English and my poor knowledge of this subject. I have a dedicated server located in Germany (Windows 2008 R2) and I live in Spain. I would like to access the internet from my home computer (Windows 7 Pro x64) through my server in Germany, so I can use a German IP, which I need sometimes. I have complete access to both computers, but I just don't know where to start. (My knowledge is limited to software development :/ ) I'd like to know where to start, whether I need to create a VPN, and so on. Thanks in advance! Update 1 I tried a lot of OpenVPN options, but sadly I know nothing about networking, so I have to accept I do not know what I'm doing :( Here are my config files (note most of the options are from the sample config files). server.conf #server config file start port 1194 proto udp dev tun server 10.0.0.0 255.255.255.224 #you may choose any subnet. 10.0.0.x is used for this example. ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt" cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.crt" key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.key" dh "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\dh1024.pem" push "redirect-gateway def1" push "dhcp-option DNS 8.8.8.8" #the following commands are optional keepalive 10 120 comp-lzo persist-key persist-tun verb 5 #config file ends client.conf #client config file start client dev tun proto udp remote 176.9.99.180 1194 resolv-retry infinite nobind persist-key persist-tun ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt" cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.crt" key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.key" ns-cert-type server comp-lzo verb 5 explicit-exit-notify 2 ping 10 ping-restart 60 route-method exe route-delay 2 # end of client config file And here are the server's network settings: IP address: 176.9.99.180 Subnet mask: 255.255.255.224 Default gateway: 176.9.99.161 Preferred DNS server: 127.0.0.1

    Read the article

  • Array not showing in JList but filled in console

    - by OVERTONE
    Hey there. I've been a busy debugger today, so I'll give the short version. I've made an ArrayList that takes names from a database, then I put the contents of the ArrayList into an array of strings. Now I want to display the array's contents in a JList. The weird thing is it was working earlier. I have two methods: one is just a little practice to make sure I was adding to the JList correctly. So here is the key code. This is the layout of my code: variables, constructor, methods. In my variables I have these 3 defined: String[] contactListNames = new String[5]; ArrayList<String> rowNames = new ArrayList<String>(); JList contactList = new JList(contactListNames); Simple enough. In my constructor I have them again: contactListNames = new String[5]; contactList = new JList(contactListNames); // I don't have the array list defined here though. printSqlDetails(); // printSqlDetails() was to make sure that the connection is all right, and it's working fine. fillContactList(); // this is the one that's causing me grief; it's where all the work happens. // fillContactListTest(); // this was the tester that makes sure I'm adding to the list correctly. Here's the code for fillContactListTest(): public void fillContactListTest() { for(int i = 0;i<3;i++) { try { String contact; System.out.println(" please fill the list at index "+ i); Scanner in = new Scanner(System.in); contact = in.next(); contactListNames[i] = contact; in.nextLine(); } catch(Exception e) { e.printStackTrace(); } } } And here's the main one that's supposed to work: public void fillContactList() { int i =0; createConnection(); ArrayList<String> rowNames = new ArrayList<String>(); try { Statement stmt = conn.createStatement(); ResultSet namesList = stmt.executeQuery("SELECT name FROM Users"); try { while (namesList.next()) { rowNames.add(namesList.getString(1)); contactListNames =(String[])rowNames.toArray(new String[rowNames.size()]); // this used to print out contents of array list // System.out.println("" + rowNames); while(i<contactListNames.length) { System.out.println(" " + contactListNames[i]); i++; } } } catch(SQLException q) { q.printStackTrace(); } conn.commit(); stmt.close(); conn.close(); } catch(SQLException e) { e.printStackTrace(); } } I really need help here; I'm at my wits' end. I just can't see why the first method adds to the JList no problem but the second one won't. Both the contactListNames array and the ArrayList print fine and have the names in them, so I must be transferring them to the JList wrong. Please help. P.S. I'm aware this is long, but trust me, it's the short version.
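    A hedged reading of why the test method "works" and the real one doesn't: JList copies nothing — it builds its model around the exact array passed to its constructor. The test writes into that original array, so the JList sees the names; fillContactList() reassigns contactListNames to a brand-new array returned by toArray(), which the JList never hears about. A sketch of the usual fix:

      // inside fillContactList(), after the ArrayList is filled:
      contactListNames = rowNames.toArray(new String[rowNames.size()]);
      contactList.setListData(contactListNames); // hands the NEW array to the JList

    Re-running setListData (or switching to a DefaultListModel and calling addElement) whenever the data changes keeps the on-screen list in sync.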

    Read the article

  • Getting a seg fault, having trouble with classes and variables.

    - by celestialorb
    Ok, so I'm still learning the ropes of C++ here so I apologize if this is a simple mistake. I have this class: class RunFrame : public wxFrame { public: RunFrame(); void OnKey(wxKeyEvent& keyEvent); private: // Configuration variables. const wxString *title; const wxPoint *origin; const wxSize *size; const wxColour *background; const wxColour *foreground; const wxString *placeholder; // Control variables. wxTextCtrl *command; // Event table. DECLARE_EVENT_TABLE() }; ...then in the OnKey method I have this code: void RunFrame::OnKey(wxKeyEvent& keyEvent) { // Take the key and process it. if(WXK_RETURN == keyEvent.GetKeyCode()) { bool empty = command -> IsEmpty(); } // Propagate the event through. keyEvent.Skip(); } ...but my program keeps seg faulting when it reaches the line where I attempt to call the IsEmpty method from the command variable. My question is, "Why?" In the constructor of the RunFrame class I can seemingly call methods for the command variable in the same way I'm doing so in the OnKey method...and it compiles correctly, it just seg faults on me when it attempts to execute that line. Here is the code for the constructor if necessary: RunFrame::RunFrame() : wxFrame(NULL, wxID_ANY, wxT("DEFAULT"), wxDefaultPosition, wxDefaultSize, wxBORDER_NONE) { // Create the styling constants. title = new wxString(wxT("RUN")); origin = new wxPoint(0, 0); size = new wxSize(250, 25); background = new wxColour(33, 33, 33); foreground = new wxColour(255, 255, 255); placeholder = new wxString(wxT("command")); // Set the styling for the frame. this -> SetTitle(*title); this -> SetSize(*size); // Create the panel and attach the TextControl to it. wxPanel *panel = new wxPanel(this, wxID_ANY, *origin, *size, wxBORDER_NONE); // Create the text control and attach it to the panel. command = new wxTextCtrl(panel, wxID_ANY, *placeholder, *origin, *size); // Set the styling for the text control. command -> SetBackgroundColour(*background); command -> SetForegroundColour(*foreground); // Connect the key event to the text control. command -> Connect(wxEVT_CHAR, wxKeyEventHandler(RunFrame::OnKey)); // Set the focus to the command box. command -> SetFocus(); } Thanks in advance for any help you can give! Regards, celestialorb
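    A hedged diagnosis matching this pattern in wxWidgets: a Connect() call without an event sink makes the object it is called on — the text control — the handler, so when OnKey fires, its this pointer is actually the wxTextCtrl masquerading as a RunFrame; reading the command member then dereferences garbage, which surfaces exactly at the IsEmpty() call. Passing the frame as the sink is the standard fix:

      // give Connect an explicit event sink, so OnKey runs with the
      // RunFrame (not the wxTextCtrl) as its 'this' pointer
      command->Connect(wxEVT_CHAR, wxKeyEventHandler(RunFrame::OnKey),
                       NULL, this);

    (An event table entry, or Bind() in newer wxWidgets, achieves the same thing.)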

    Read the article

  • SQL Server 2008: Table Insert and Range Check?

    - by LB .
    I'm using the Table Value constructor to insert a bunch of rows at a time. However, since I'm using SQL replication, I run into a range check constraint on the publisher on my automatically managed id column. The reason is that the id range doesn't seem to be increased during an insert of several values, meaning that the max id (or the id threshold) is reached before the actual range expansion can occur. It looks like this known problem, for which the solution is either running the merge agent or running the sp_adjustpublisheridentityrange stored procedure. I'm literally doing something like: INSERT INTO dbo.MyProducts (Name, ListPrice) VALUES ('Helmet', 25.50), ('Wheel', 30.00), ((SELECT Name FROM Production.Product WHERE ProductID = 720), (SELECT ListPrice FROM Production.Product WHERE ProductID = 720)); GO What are my options (if I don't want to or can't adopt either of the proposed solutions)? Expand the range? Decrease the threshold? Can I programmatically modify my request to circumvent this problem? Thanks.
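    A hedged sketch of the programmatic route, using the names from the question and the parameters documented for SQL Server 2008 replication — reallocating a fresh identity range just before the batch keeps a multi-row VALUES insert inside one range:

      -- ask replication for a fresh publisher identity range up front
      EXEC sys.sp_adjustpublisheridentityrange
          @table_name  = 'MyProducts',
          @table_owner = 'dbo';

      INSERT INTO dbo.MyProducts (Name, ListPrice)
      VALUES ('Helmet', 25.50),
             ('Wheel', 30.00);

    Failing that, widening the range or lowering the threshold on the publication article (so one batch can never exhaust a range) is the configuration-side equivalent; splitting the insert into single-row statements merely gives the automatic adjustment a chance to run between rows.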

    Read the article

  • Perfectly reproducible select statement default ordering issue....

    - by Dave
    Hi, I've recently been chasing an issue with a client's db... solution found, but impossible to recreate. Essentially, we're doing a Select * from mytable where ArbitraryColumn = 75, where MyTable has an identity column, called 'MyIdentityColumn', incremented by one on each insert. Naturally, I would normally assume that the order returned would be the order in which the rows were inserted (a bad assumption, but one which was forced onto me through an inherited application - which has been patched). Essentially, I would like suggestions as to why the database behaves differently when restored to my local machine (same OS, same SQL Server version - 200 sp3, same collation, and the same backup instance restored on it) than it does as a test DB on the client site. When I perform the above select, I get them in order of insert (i.e. identity column ordered ascending). On the client, it seems random (but the same 'random' order each time)... A few other points: I have the same collation on my test server as the client; the same DB backup restored to a test DB only I can access; the same SQL Server version and service pack; the same OS; the test DB is a new DB - new log and MDF... I have the problem 'solved' by adding an explicit order by clause, but I want to understand the cause of the issue, given that my exact attempts to recreate it have been futile while it remains perfectly recreatable on the client server... Thanks in advance, Dave
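    For the record, the cause is simply that SQL Server makes no ordering promise without ORDER BY: the "natural" order readers see is an accident of allocation order, fragmentation, and the chosen plan, and a restore onto different hardware can legitimately change it. The explicit clause is not a workaround but the only contract:

      SELECT *
      FROM mytable
      WHERE ArbitraryColumn = 75
      ORDER BY MyIdentityColumn;  -- the only order the engine guarantees

    So identical data, binaries, and collation can still disagree between machines — nothing about the client's server needs to be "wrong" for this to happen.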

    Read the article

  • SQL Server Read Locking behavior

    - by Charles Bretana
    When SQL Server Books Online says that "Shared (S) locks on a resource are released as soon as the read operation completes, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction," and assuming we're talking about a row-level lock, with no explicit transaction, at the default isolation level (Read Committed), what does "read operation" refer to? The reading of a single row of data? The reading of a single 8K IO page? Or does it mean until the complete Select statement in which the lock was created has finished executing, no matter how many other rows are involved? NOTE: The reason I need to know this is that we have a several-second read-only select statement generated by a data layer web service, which creates page-level shared read locks, generating a deadlock by conflicting with row-level exclusive update locks from a replication process that keeps the server updated. The select statement is fairly large, with many sub-selects, and one DBA is proposing that we rewrite it to break it up into multiple smaller statements (shorter-running pieces), "to cut down on how long the locks are held". As this assumes that the shared read locks are held until the complete select statement has finished, if that is wrong (if locks are released when the row, or the page, is read) then that approach would have no effect whatsoever....
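    On the DBA's proposal, a hedged note: under read committed, a shared row lock is normally released once the row has been read (a page lock once the page has been read), not held to the end of the statement, so breaking the SELECT apart may shorten nothing. A remedy often applied to exactly this reader-versus-replication deadlock is row versioning, sketched here against a hypothetical database name:

      -- readers take no shared locks at all; they read the last committed
      -- version instead, so they cannot deadlock with the update locks
      ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    (It needs a moment of exclusive access to the database to switch on, and it grows tempdb version-store usage — both worth testing before production.)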

    Read the article

  • Share one SSL certificate between multiple vhosts

    - by Cesar
    I have a setup like this: <VirtualHost 192.168.1.104:80> ServerName domain1 DocumentRoot /home/domain/public_html ... </VirtualHost> <VirtualHost 192.168.1.104:80> ServerName domain2 DocumentRoot /home/domain2/public_html ... </VirtualHost> <VirtualHost 192.168.1.104:80> DocumentRoot /home/domain3/public_html ServerName domain3 ... </VirtualHost> <VirtualHost 192.168.1.104:443> ServerName domain3 SSLCertificateFile /usr/share/ssl/certs/certificate.crt SSLCertificateKeyFile /usr/share/ssl/private/private.key SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle ... </VirtualHost> I want to use domain3's certificate for the other domains, preferably without having to repeat the whole <VirtualHost 192.168.1.104:443> config. In other words, I want something like this: if the vhost has no explicit SSL config, use the cert for domain3 (/usr/share/ssl/certs/certificate.crt). Notes: 1.- I will for sure be setting up more vhosts in the future. 2.- I know about (and don't care about) the SSL warnings the browser will show (hostname mismatch). Is this possible? How?
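    A hedged sketch of one way to get the "default cert" behaviour — worth verifying on the Apache version in use, since SSL directive inheritance has shifted between releases. The SSL* directives can live in the main server context, where SSL-enabled vhosts pick them up, leaving each vhost with little more than SSLEngine on:

      # main server context (outside any <VirtualHost>), inherited by the vhosts
      SSLCertificateFile      /usr/share/ssl/certs/certificate.crt
      SSLCertificateKeyFile   /usr/share/ssl/private/private.key
      SSLCACertificateFile    /usr/share/ssl/certs/bundle.cabundle

      <VirtualHost 192.168.1.104:443>
          ServerName domain1
          DocumentRoot /home/domain/public_html
          SSLEngine on
      </VirtualHost>

    If inheritance proves unreliable there, the same effect comes from putting the three directives in one file and adding an Include line to each :443 vhost.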

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access-list for every interface on the device. The access-lists are added to the interfaces via the access-group command. Let's say I have these access-lists: access-group WAN_access_in in interface WAN access-group INTERNAL_access_in in interface INTERNAL access-group Production_access_in in interface PRODUCTION WAN has security level 0, Internal has security level 100, and Production has security level 50. What I want to do is have an easy way to poke holes from Production to Internal. This seems to be pretty easy, but then the whole notion of security levels doesn't seem to matter anymore: I then can't exit out of the WAN interface. I would need to add an ANY ANY access-list, which in turn opens access completely to the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite the hassle. How is this done in practice? In iptables I would use logic something like this: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.
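    In practice the pattern is usually exactly the explicit-deny one, hedged here with made-up addresses: once an ACL is applied in on an interface it replaces the security-level default, so the holes go first, then one deny covering the internal net, then the blanket permit that restores internet access:

      ! poke a hole: one production host to one internal service (addresses hypothetical)
      access-list PRODUCTION_access_in extended permit tcp host 10.50.0.10 host 10.100.0.5 eq 1433
      ! everything else toward INTERNAL stays closed
      access-list PRODUCTION_access_in extended deny ip any 10.100.0.0 255.255.255.0
      ! and the rest of the world stays reachable, as the security levels used to allow
      access-list PRODUCTION_access_in extended permit ip any any
      access-group PRODUCTION_access_in in interface PRODUCTION

    One deny per protected subnet is the whole cost; object-groups can collapse several subnets into a single ACE if the list grows.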

    Read the article

  • VMware Workstation Bridged Network Host UnReachable

    - by user2097818
    VMware Workstation 7 on Win7-64 (Home Premium). I have confirmed this on any guest running on this machine (from winxp to debian). I am using a bridged network connection for my guests (Automatic on VMnet0). All of the network configuration is done with DHCP (including on the host). Problem What I can not do: Ping my host machine from inside any VM. (either shows me "Destination Host Unreachable" or will just timeout) What I CAN do right after power up, with no problems at all. I can connect to the internet from inside the VM I can ping my router from inside the VM I can ping other machines on my network from inside the VM Other machines can ping the VM Other machines can ping the host My host machine can ping the VM (this one is important. read further) Details So I have my router assigned as 192.168.2.1/255.255.255.0, and the router provides the DHCP service (and it seems to be doing so successfully). There are no IP conflicts on the network that I am aware of. All Gateways and Subnet masks are appropriate and matching. My entire workshop is on one single subnet, with one single DHCP server and gateway. There is one method in which I can ping successfully, but it requires an active connection initiated from the host (I start pinging from host to VM). During the period of the active connection, I can successfully ping from VM to host, using explicit IP address. As soon as the host connection is closed, the VM ping starts hanging with the same old messages. My Thoughts This really feels like a firewall problem, but I have turned off all firewalls on host and VM, powered down the network, powered back up, and the problem still persists. And if it was firewall, why would only the IP address associated with bridged VM networks be blocked. I feel as though my host operating system (Win7) is somehow configured incorrectly, or, VMware Workstation is configured incorrectly from the host side. Although I have done my best to put everything in default, I feel like I am missing something silly.

    Read the article

  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website. 2) This apache2 instance is already successfully set up so that other websites on it are accessible internally and externally. 3) I am using an internal bind9 server to resolve the new website's domain internally to the private IP. This bind9 server is not public facing, nor is it the master server on the internet. 4) The DNS internally resolves to the right IP. 5) Firefox reports "server not found". 6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths of course). In turn I have reloaded and restarted apache2 repeatedly. 7) I have an entry to forward the .org .info .net alternative TLDs to .com in the vhost config for this domain, and my browser goes from .org to .com despite note #5. 8) /var/log/apache2/access.log shows when someone externally tries to access the site, but no activity is observed when someone tries to access it internally. Changing the log level does not appear to improve the situation. 9) I am out of ideas; nothing appears to be wrong. Please help? To be explicit: why is this new site unreachable internally? I would like to clarify something, even though I have already outlined this. YES, I know this system is in a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server, set up in the same way with internal resolution, work right now and have done for a while. Everything for this domain is set up the same as the other working domains as far as I can tell. The other working domains are internally AND externally accessible. This domain I am working with is currently only externally accessible. When I go to it internally, Firefox tells me "Server not found".

    Read the article

  • Apache Named Virtual Hosts and HTTPS

    - by Freddie Witherden
    I have an SSL certificate which is valid for multiple (sub-) domains. In Apache I have configured this as follows: In /etc/apache2/apache2.conf NameVirtualHost <my ip>:443 Then for one named virtual host I have <VirtualHost <my ip>:443> ServerName ... SSLEngine on SSLCertificateFile ... SSLCertificateKeyFile ... SSLCertificateChainFile ... SSLCACertificateFile ... </VirtualHost> Finally, for every other site I want to be accessible over HTTPS I just have a <VirtualHost <my ip>:443> ServerName ... </VirtualHost> The good news is that it works. However, when I start Apache I get warning messages [warn] Init: SSL server IP/port conflict: Domain A:443 (...) vs. Domain B:443 (...) [warn] Init: SSL server IP/port conflict: Domain C:443 (...) vs. Domain B:443 (...) [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!! So, my question is: how should I be configuring this? Clearly from the warning messages I am doing something wrong (although it does work!), however, the above configuration was the only one I could get to work. It is somewhat annoying as the configuration files have an explicit dependence on my IP address.

    Read the article

  • Not sure why I'm getting a NullPointerException when creating a Swing component

    - by Alex
    The error occurs when creating the Box object. public void drawBoard(Board board){ for(int row = 0; row < 8; row++){ for(int col = 0; col < 8; col++){ Box box = new Box(board.getSquare(col, row).getColour(), col, row); squarePanel[col][row].add(box); } } Board is given from the Game constructor here (another class): public Game() throws Throwable{ View graphics = new View(); board = new Board(); board.setDefault(); graphics.drawBoard(board); } The Board constructor looks like this: public Board(){ grid = new Square[COLUMNS][ROWS]; for(int row = 0; row < 8; row++){ for(int col = 0; col < 8; col++){ grid[col][row] = new Square(this); } } for(int row = 0; row < 8; row++){ for(int col = 0; col < 4; col++){ int odd = 2*col + 1; int even = 2*col; getSquare(odd, row).setColour(Color.BLACK); getSquare(even, row).setColour(Color.WHITE); } } } And finally the Box class: class Box extends JComponent{ Color boxColour; int col, row; public Box(Color boxColour, int col, int row){ this.boxColour = boxColour; this.col = col; this.row = row; repaint(); } public void paint(Graphics drawBox){ drawBox.setColor(boxColour); drawBox.drawRect(50*col, 50*row, 50, 50); drawBox.fillRect(50*col, 50*row, 50, 50); } } So while looping through the array, it uses the two integers as coordinates to create the Box. The coordinates are referenced and then repaint() is run. The box also gets the colour, using the two integers, from the Square in the Board class. Since the colour is already set, before the drawBoard(board) method is run, that shouldn't be a problem, right? Exception in thread "main" java.lang.NullPointerException at View.drawBoard(View.java:38) at Game.<init>(Game.java:21) at Game.main(Game.java:14) The relevant part of Square import java.awt.Color; public class Square { private Piece piece; private Board board; private Color squareColour; public Square(Board board){ this.board = board; } public void setColour(Color squareColour){ this.squareColour = squareColour; } public Color getColour(){ return squareColour; }
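    A hedged diagnosis from the stack trace: drawBoard() line 38 is the add(box) call, and nothing shown ever fills squarePanel, so squarePanel[col][row] is null — new Box(...) itself succeeds. A sketch of initializing the grid in View before drawing (container details assumed):

      // in the View constructor — every cell needs a real component first
      JPanel[][] squarePanel = new JPanel[8][8];
      setLayout(new GridLayout(8, 8));
      for (int row = 0; row < 8; row++) {
          for (int col = 0; col < 8; col++) {
              squarePanel[col][row] = new JPanel();
              add(squarePanel[col][row]);
          }
      }

    (Separately, Box.paint draws at 50*col, 50*row in the panel's own coordinate system; once each Box sits inside its own small panel, drawing at 0, 0 with the panel's size is probably what's wanted — another assumption to verify.)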

    Read the article

  • Calling Object Methods in Code

    - by Mister R2
    I'm a bit new to PHP, and I'm more experienced with strongly-typed languages such as Java, C# or C++. I'm currently writing a web tool in PHP, and I am having an issue trying to do what I want. The simple idea of what I want to do in code is run through some emails I used PHP-IMAP to get. I then create email objects (a class I defined) and put them in an array. Later in the code, however, I cycle through those emails to display them. And, as you might have guessed, that is where I have an issue: I try to use an Email class object method in that later loop -- and I'm pretty sure PHP doesn't know that the variables in the array happen to be Email class objects! I wrote a toString method, and I want to call it in the loop. While I don't need to do this for the final version of this tool, I would like to find out what I'm missing. This is the class and the loop where I'm calling the method: include 'imap_email_interface.php'; class ImapEmail implements imap_email_interface { // Email data var $msgno; var $to; var $from; var $subject; var $body; var $attachment; // Email behavior /* PHP 4 ~ legacy constructor */ public function ImapEmail($message_number) { $this->__construct(); $this->msgno = $message_number; } /* PHP 5 Constructor */ public function __construct($message_number) { $this->msgno = $message_number; } public function send($send_to) { // Not Yet Needed! Seriously! } public function setHeaderDirectly($TO, $FROM, $SUBJECT) { $this->to = $TO; $this->from = $FROM; $this->subject = $SUBJECT; } public function setHeaderIndirectly($HEADER) { if (isset($HEADER->to[0]->personal)) $this->to = '"'.$HEADER->to[0]->personal.'", '.$HEADER->to[0]->mailbox.'@'.$HEADER->to[0]->host; else $this->to = $HEADER->to[0]->mailbox.'@'.$HEADER->to[0]->host; $this->from = '"'.$HEADER->from[0]->personal.'", '.$HEADER->from[0]->mailbox.'@'.$HEADER->from[0]->host; $this->subject = $HEADER->subject; } public function setBody($BODY) { $this->body = $BODY; } public function setAttachment($ATTCH) { $this->attachment = $ATTCH; } public function toString() { $str = '[TO]: ' . $this->to . '<br />' . '[FROM]: ' . $this->from . '<br />' . '[SUBJECT]: ' . $this->subject . '<br />'; $str .= '[Attachment]: '.$this->attachment.'<br />'; return $str; } } ?> The Loop: foreach ($orderFileEmails as $x) { $x->toString(); echo '<br /><br />'; } Any ideas?
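    A hedged reading: PHP's dynamic typing is not the obstacle here — the array really does hold ImapEmail objects and the method call succeeds. The loop simply discards the return value, since toString() returns a string rather than printing it. Two small fixes, sketched:

      foreach ($orderFileEmails as $x) {
          echo $x->toString();   // the return value has to be echoed
          echo '<br /><br />';
      }

      /* and the PHP 4-style constructor forwards no argument, so if it ever
         runs it calls __construct() without $message_number: */
      public function ImapEmail($message_number) {
          $this->__construct($message_number);
      }

    (Naming the method __toString() instead would let echo $x; work directly, as PHP calls it implicitly.)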

    Read the article

  • Custom string class (C++)

    - by Sanctus2099
    Hey guys. I'm trying to write my own C++ string class for educational purposes and out of need. The first thing is that I don't know that much about operators, and that's why I want to learn them. I started writing my class, but when I run it, it hangs the program without actually crashing. Take a look at the following code please before reading further: class CString { private: char* cstr; public: CString(); CString(char* str); CString(CString& str); ~CString(); operator char*(); operator const char*(); CString operator+(const CString& q)const; CString operator=(const CString& q); }; First of all I'm not so sure I declared everything right. I tried googling it, but all the tutorials about overloading explain the basic idea, which is very simple, yet fail to explain how and when each thing is called. For instance, in my = operator the program calls CString(CString& str); but I have no idea why. I have also attached the cpp file below: CString::CString() { cstr=0; } CString::CString(char *str) { cstr=new char[strlen(str)]; strcpy(cstr,str); } CString::CString(CString& q) { if(this==&q) return; cstr = new char[strlen(q.cstr)+1]; strcpy(cstr,q.cstr); } CString::~CString() { if(cstr) delete[] cstr; } CString::operator char*() { return cstr; } CString::operator const char* () { return cstr; } CString CString::operator +(const CString &q) const { CString s; s.cstr = new char[strlen(cstr)+strlen(q.cstr)+1]; strcpy(s.cstr,cstr); strcat(s.cstr,q.cstr); return s; } CString CString::operator =(const CString &q) { if(this!=&q) { if(cstr) delete[] cstr; cstr = new char[strlen(q.cstr)+1]; strcpy(cstr,q.cstr); } return *this; } For testing I used code as simple as this: CString a = CString("Hello") + CString(" World"); printf(a); I tried debugging it, but at a point I get lost. First it calls the constructor 2 times, for "Hello" and for " World". Then it gets into the + operator, which is fine. Then it calls the constructor for the empty string. After that it gets into CString(CString& str), and now I'm lost. Why is this happening? After this I noticed my string containing "Hello World" is in the destructor (a few times in a row). Again I'm very puzzled. After converting again from char* to CString and back and forth, it stops. It never gets into the = operator, but neither does it go further; printf(a) is never reached. I use Visual Studio 2010 for this, but it's basically just standard C++ code and thus I don't think it should make that much of a difference.
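    A hedged rundown of the usual suspects in exactly this situation: CString a = CString("Hello") + CString(" World") has to copy a temporary, and a copy constructor taking a non-const CString& cannot bind to one — combined with the operator char* conversions, the compiler can wander into surprising call chains. The conventional signatures, sketched:

      CString(const CString& str);           // const ref: binds to temporaries;
                                             // the self-check inside it is unnecessary
      CString& operator=(const CString& q);  // assignment returns *this by reference

      CString::CString(char* str) {
          cstr = new char[strlen(str) + 1];  // +1: room for the terminating '\0'
          strcpy(cstr, str);
      }

    The missing +1 in the char* constructor writes one byte past the allocation, which is the kind of corruption that shows up later as a hang rather than a clean crash. printf(a) compiles only because of the implicit operator char*; printf("%s", (const char*)a) says the same thing without leaning on the conversion.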

    Read the article

  • Open an X application going through many hoops (SSH, VPN, etc.)

    - by ??O?????
    The players: my home computer, running Linux with an X server running (call it HOME); a remote site, to which I can connect over the internet using a VPN (SITE); a Linux computer at the remote site, to which I can connect with ssh -X and nicely have X clients displaying on my local server (MIDDLE); and a very old Irix machine (an Onyx) at the remote site, which has no SSH server (therefore I can't ssh -X to it), only an ssh client (ONYX). Purpose: I need to run an X11 application on the ONYX machine, and see the GUI on HOME. I think I'm stumbling on xauth issues. So far, the current situation is: HOME connects to SITE; a vncserver starts on MIDDLE:7; vncviewer on HOME connects to the vncserver on MIDDLE; ONYX starts a forwarding ssh session to MIDDLE: ssh -TfN -L 6007:127.0.0.1:6007 MIDDLE; DISPLAY=localhost:7 xclient on ONYX fails with Xlib: connection to "127.0.0.1:7.0" refused by server. I do know that the forwarding (6007:127.0.0.1:6007) succeeds. A previous attempt was: HOME connects to SITE; HOME connects to MIDDLE: ssh -X MIDDLE (xclock displays on HOME, DISPLAY is 127.0.0.1:10); ONYX starts an SSH tunnel to MIDDLE: ssh -TfN -L 6010:127.0.0.1:6010 MIDDLE; DISPLAY=127.0.0.1:10 xclient fails with X connection to 127.0.0.1:10.0 broken (explicit kill or server shutdown), while an error pops up in the MIDDLE session: X11 connection rejected because of wrong authentication. Despair: how can I achieve my purpose?
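    The second attempt looks one cookie away from working — a hedged guess: "wrong authentication" means the forwarded display behind MIDDLE's :10 rejected ONYX's client because ONYX holds no MIT-MAGIC-COOKIE for that display. Copying the cookie over is the classic move:

      # on MIDDLE, inside the ssh -X session where DISPLAY=127.0.0.1:10 works:
      xauth extract - $DISPLAY | ssh ONYX xauth merge -

      # then on ONYX, through the existing -L 6010 tunnel:
      DISPLAY=127.0.0.1:10 xclient

    The display name recorded in the cookie has to match what ONYX uses (xauth list on both ends shows the exact spelling); with the cookie in place, the tunnel should carry the session to HOME via MIDDLE.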

    Read the article

  • GDI+ & Delphi, PNG resource, DrawImage, ColorConversion -> Out of Memory

    - by Paul
    I have started to toy around with GDI+ in Delphi 2009. Among the things that I wanted to do was to load a PNG resource and apply a Color Conversion to it when drawing it to the Graphics object. I am using the code provided in http://www.bilsen.com/gdiplus/. To do that I just added a new constructor to TGPBitmap that uses the same code found in <www.codeproject.com>/KB/GDI-plus/cgdiplusbitmap.aspx (C++) or <www.masm32.com>/board/index.php?topic=10191.0 (MASM) converted to Delphi. For reference, the converted code is as follows: constructor TGPBitmap.Create(const Instance: HInst; const PngName: String; dummy : PngResource_t); const cPngType : string = 'PNG'; var hResource : HRSRC; imageSize : DWORD; pResourceData : Pointer; hBuffer : HGLOBAL; pBuffer : Pointer; pStream : IStream; begin inherited Create; hResource := FindResource(Instance, PWideChar(PngName), PWideChar(cPngType)); if hResource = 0 then Exit; imageSize := SizeofResource(Instance, hResource); if imageSize = 0 then Exit; pResourceData := LockResource(LoadResource(Instance, hResource)); if pResourceData = nil then Exit; hBuffer := GlobalAlloc(GMEM_MOVEABLE, imageSize); if hBuffer <> 0 then begin try pBuffer := GlobalLock(hBuffer); if pBuffer <> nil then begin try CopyMemory(pBuffer, pResourceData, imageSize); if CreateStreamOnHGlobal(hBuffer, FALSE, pStream) = S_OK then begin GdipCheck(GdipCreateBitmapFromStream(pStream, FNativeHandle)); end; finally GlobalUnlock(hBuffer); pStream := nil; end; end; finally GlobalFree(hBuffer); end; end; end; The code seems to work fine as I am able to draw the loaded image without any problems. However, if I try to apply a Color Conversion when drawing it, then I get a lovely error: (GDI+ Error) Out of Memory. If I load the bitmap from a file, or if I create a temporary to which I draw the initial bitmap and then use the temporary, then it works just fine. What bugs me is that if I take the C++ project from codeproject, add the same PNG as resource and use the same color conversion (in other words, do the exact same thing I am doing in Delphi in the same order and with the same function calls that happen to go to the same DLL), then it works. The C++ code looks like this: const Gdiplus::ColorMatrix cTrMatrix = { { {1.0, 0.0, 0.0, 0.0, 0.0}, {0.0, 1.0, 0.0, 0.0, 0.0}, {0.0, 0.0, 1.0, 0.0, 0.0}, {0.0, 0.0, 0.0, 0.5, 0.0}, {0.0, 0.0, 0.0, 0.0, 1.0} } }; Gdiplus::ImageAttributes imgAttrs; imgAttrs.SetColorMatrix(&cTrMatrix, Gdiplus::ColorMatrixFlagsDefault, Gdiplus::ColorAdjustTypeBitmap); graphics.DrawImage(*pBitmap, Gdiplus::Rect(0, 0, pBitmap->m_pBitmap->GetWidth(), pBitmap->m_pBitmap->GetHeight()), 0, 0, pBitmap->m_pBitmap->GetWidth(), pBitmap->m_pBitmap->GetHeight(), Gdiplus::UnitPixel, &imgAttrs); The Delphi counterpart is: const cTrMatrix: TGPColorMatrix = ( M: ((1.0, 0.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.5, 0.0), (0.0, 0.0, 0.0, 0.0, 1.0))); var lImgAttrTr : IGPImageAttributes; lBitmap : IGPBitmap; begin // ... lImgAttrTr := TGPImageAttributes.Create; lImgAttrTr.SetColorMatrix(cTrMatrix, ColorMatrixFlagsDefault, ColorAdjustTypeBitmap); aGraphics.DrawImage ( lBitmap, TGPRect.Create ( 0, 0, lBitmap.Width, lBitmap.Height ), 0, 0, lBitmap.Width, lBitmap.Height, UnitPixel, lImgAttrTr ); I am completely clueless as to what may be causing the issue, and Google has not been of any help. Any ideas, comments and explanations are highly appreciated.
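    A hedged theory that fits "works plain, dies with a ColorMatrix": GdipCreateBitmapFromStream keeps a reference to the IStream and may re-decode the PNG lazily, yet this constructor frees the backing memory (GlobalFree) and drops pStream on exit — the color-converted DrawImage is then the first call that forces GDI+ back to a stream that no longer exists, surfacing as the generic OutOfMemory status. A sketch of the usual cure in the same Delphi style:

      // let the stream own the HGLOBAL (True) and do NOT call GlobalFree;
      // keep the stream referenced for the bitmap's lifetime
      if CreateStreamOnHGlobal(hBuffer, True, FStream) = S_OK then
        GdipCheck(GdipCreateBitmapFromStream(FStream, FNativeHandle));

    (FStream here is a hypothetical IStream field on the class rather than a local. An alternative with the same effect is to draw the stream-backed bitmap once into a fresh TGPBitmap of the same size and keep only the copy, detaching it from the stream entirely.)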

    Read the article

  • Step by Step / Deep Explanation: The Power of (Co)Yoneda (preferably in Scala) through Coroutines

    - by Mzk
    some background code /** FunctorStr: ∀ F[-]. (∀ A B. (A -> B) -> F[A] -> F[B]) */ trait FunctorStr[F[_]] { self => def map[A, B](f: A => B): F[A] => F[B] } trait Yoneda[F[_], A] { yo => def apply[B](f: A => B): F[B] def run: F[A] = yo(x => x) def map[B](f: A => B): Yoneda[F, B] = new Yoneda[F, B] { def apply[X](g: B => X) = yo(f andThen g) } } object Yoneda { implicit def yonedafunctor[F[_]]: FunctorStr[({ type l[x] = Yoneda[F, x] })#l] = new FunctorStr[({ type l[x] = Yoneda[F, x] })#l] { def map[A, B](f: A => B): Yoneda[F, A] => Yoneda[F, B] = _ map f } def apply[F[_]: FunctorStr, X](x: F[X]): Yoneda[F, X] = new Yoneda[F, X] { def apply[Y](f: X => Y) = Functor[F].map(f) apply x } } trait Coyoneda[F[_], A] { co => type I def fi: F[I] def k: I => A final def map[B](f: A => B): Coyoneda.Aux[F, B, I] = Coyoneda(fi)(f compose k) } object Coyoneda { type Aux[F[_], A, B] = Coyoneda[F, A] { type I = B } def apply[F[_], B, A](x: F[B])(f: B => A): Aux[F, A, B] = new Coyoneda[F, A] { type I = B val fi = x val k = f } implicit def coyonedaFunctor[F[_]]: FunctorStr[({ type l[x] = Coyoneda[F, x] })#l] = new CoyonedaFunctor[F] {} trait CoyonedaFunctor[F[_]] extends FunctorStr[({type l[x] = Coyoneda[F, x]})#l] { override def map[A, B](f: A => B): Coyoneda[F, A] => Coyoneda[F, B] = x => apply(x.fi)(f compose x.k) } def liftCoyoneda[T[_], A](x: T[A]): Coyoneda[T, A] = apply(x)(a => a) } Now I thought I understood Yoneda and Coyoneda a bit just from the types – i.e. that they quantify / abstract over map, fixed in some type constructor F and some type A, to any type B, returning F[B] or (Co)Yoneda[F, B], thus providing map fusion for free (is this kind of like a cut rule for map?). But I see that Coyoneda is a functor for any type constructor F regardless of F being a Functor, and that I don't fully grasp. Now I'm in a situation where I'm trying to define a Coroutine type (I'm looking at https://www.fpcomplete.com/school/to-infinity-and-beyond/pick-of-the-week/coroutines-for-streaming/part-2-coroutines for the types to get started with): case class Coroutine[S[_], M[_], R](resume: M[CoroutineState[S, M, R]]) sealed trait CoroutineState[S[_], M[_], R] object CoroutineState { case class Run[S[_], M[_], R](x: S[Coroutine[S, M, R]]) extends CoroutineState[S, M, R] case class Done[R](x: R) extends CoroutineState[Nothing, Nothing, R] class CoroutineStateFunctor[S[_], M[_]](F: FunctorStr[S]) extends FunctorStr[({ type l[x] = CoroutineState[S, M, x]})#l] { override def map[A, B](f : A => B) : CoroutineState[S, M, A] => CoroutineState[S, M, B] = { ??? } } } and I think that if I understood Coyoneda better I could leverage it to make the S & M type constructors functors much more easily, plus I see Coyoneda potentially playing a role in defining recursion schemes, as the functor requirement is pervasive. So how could I use Coyoneda to make type constructors functors, like for example the coroutine state? Or something like a Pause functor?
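    A hedged illustration of the part that's hard to grasp — why Coyoneda[F, -] is a functor no matter what F is: its map never touches F at all, it only composes the new function onto the stored continuation k, and F has to be a real functor only at the very end, when lowering back out. Sketched against the definitions above (untested):

      // map is free because it is just function composition on k;
      // the cost (one actual F-map) is paid exactly once, at lowering time
      def lower[F[_], A](c: Coyoneda[F, A])(implicit F: FunctorStr[F]): F[A] =
        F.map(c.k)(c.fi)

      val co    = Coyoneda.liftCoyoneda(List(1, 2, 3)) // Coyoneda[List, Int]
      val fused = co.map(_ + 1).map(_ * 2)             // two maps fused into one k

    So for the coroutine, one hypothetical restructuring is to store the suspension as Coyoneda[S, Coroutine[S, M, R]] (lifting on construction): CoroutineState then gets its map without demanding FunctorStr[S] anywhere except the single place the machine is finally run.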

    Read the article

  • What is the best practice when coding math classes/functions?

    - by Isaac Clarke
    Introductory note: I voluntarily chose a wide subject. You know that quote about teaching a cat to fish — that's it. I don't need an answer to my question, I need an explanation and advice. I know you guys are good at this ;) Hi guys, I'm currently implementing some algorithms into an existing program. Long story short, I created a new class, "Adder". An Adder is a member of another class representing the physical object actually doing the calculation, which calls adder.calc() with its parameters (merely a list of objects to do the maths on). To do these maths, I need some parameters, which do not exist outside of the class (but can be set, see below). They're neither config parameters nor members of other classes. These parameters are D1 and D2, distances, and three arrays of fixed size: alpha, beta, delta. I know some of you are more comfortable reading code than reading text, so here you go: class Adder { public: Adder(); virtual ~Adder(); void set( float d1, float d2 ); void set( float d1, float d2, int alpha[N_MAX], int beta[N_MAX], int delta[N_MAX] ); // Snipped prototypes float calc( List& ... ); // ... inline float get_d1() { return d1_ ;}; inline float get_d2() { return d2_ ;}; private: float d1_; float d2_; int alpha_[N_MAX]; // A #define N_MAX is done elsewhere int beta_[N_MAX]; int delta_[N_MAX]; }; Since this object is used as a member of another class, it is declared in a *.h: private: Adder adder_; By doing that, I couldn't initialize the arrays (alpha/beta/delta) directly in the constructor ( int T[3] = { 1, 2, 3 }; ) without having to iterate through the three arrays. I thought of putting them in static const, but I don't think that's the proper way of solving such problems. My second guess was to use the constructor to initialize the arrays: Adder::Adder() { int alpha[N_MAX] = { 0, -60, -120, 180, 120, 60 }; int beta[N_MAX] = { 0, 0, 0, 0, 0, 0 }; int delta[N_MAX] = { 0, 0, 180, 180, 180, 0 }; set( 2.5, 0, alpha, beta, delta ); } void Adder::set( float d1, float d2 ) { if (d1 > 0) d1_ = d1; if (d2 > 0) d2_ = d2; } void Adder::set( float d1, float d2, int alpha[N_MAX], int beta[N_MAX], int delta[N_MAX] ) { set( d1, d2 ); for (int i = 0; i < N_MAX; ++i) { alpha_[i] = alpha[i]; beta_[i] = beta[i]; delta_[i] = delta[i]; } } My question is: would it be better to use another function - init() - which would initialize the arrays? Or is there a better way of doing that? My bonus question is: did you see any mistakes or bad practice along the way?
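    One hedged pre-C++11 alternative to an init(): keep the defaults as static const arrays in the implementation file and copy them in the constructor, so the values live in exactly one place and the constructor body stays a one-liner. A sketch (set's array parameters would need to become const int* for this to compile):

      // Adder.cpp — defaults in one spot, assuming C++03
      namespace {
          const int kDefaultAlpha[N_MAX] = { 0, -60, -120, 180, 120, 60 };
          const int kDefaultBeta [N_MAX] = { 0, 0, 0, 0, 0, 0 };
          const int kDefaultDelta[N_MAX] = { 0, 0, 180, 180, 180, 0 };
      }

      Adder::Adder() {
          set(2.5f, 0.0f, kDefaultAlpha, kDefaultBeta, kDefaultDelta);
      }

    As a bonus-question nibble: set(2.5, 0) silently skips d2 because of the if (d2 > 0) guard, leaving d2_ uninitialized and making a rejected value indistinguishable from an accepted one — worth either asserting on or documenting.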

    Read the article

  • Stronger laptop_mode in Linux

    - by Vi
    Can I have a stronger laptop mode in Linux? I want to spin down the hard drive and prevent it from spinning up even if something wants to read something not in cache. In general I want to have these modes: Normal; Current laptop mode; Stronger laptop mode: spin up only when something needs to read something uncached (and cache it), with no spin-ups to write anything unless there is real memory pressure (exception: an explicit "sync" command in the console) — the kernel is allowed to keep processes in D-sleep for 10 seconds for that; Forced laptop mode: do not spin up, period — keep offending processes in D-sleep until I turn off this mode, like there is a bomb instead of a hard drive. I also want to have access times tracked (mount -o atime), but I don't want the hard drive to be spun up only to update them. Are there settings or kernel patches that can get closer to this? Maybe I should write a special I/O scheduler for "forced laptop mode"? E.g. echo suspend > /sys/block/sda/queue/scheduler to lock the drive and echo cfq > /sys/block/sda/queue/scheduler to unlock it again?
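    Without patching, the closest stock knobs are the laptop-mode and dirty-writeback sysctls plus relatime — a hedged sketch, values illustrative:

      # stretch writeback so cached writes (and atime updates) pool in RAM
      echo 5     > /proc/sys/vm/laptop_mode
      echo 60000 > /proc/sys/vm/dirty_expire_centisecs     # dirty data may sit ~10 min
      echo 60000 > /proc/sys/vm/dirty_writeback_centisecs
      mount -o remount,relatime /      # atime still tracked, rarely written
      hdparm -S 12 /dev/sda            # spin down after 60 s idle

    That approximates the "stronger" mode (uncached reads still wake the disk); the "forced" mode — indefinite D-sleep for anything that misses the cache — has no mainline switch, so a custom blocking I/O scheduler or a device-mapper target that queues all requests would indeed be the route.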

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >