Search Results

Search found 17857 results on 715 pages for 'oracle enterprise visualization'.

  • Why would a PCI scan fail because of components that are not even installed?

    - by Brandon
    Recently a PCI scan was run against a web server and the result was a failure. Some of the issues could be fixed, however others simply make no sense to me. The machine was a clean install, there are only two things running, the .NET 3.5 website and the dotDefender web application firewall. However there are several errors similar to: Web server vulnerability Impact: /servlet/SessionServlet: JRun or Netware WebSphere default servlet found. All default code should be removed from servers. Risk Factor: Medium/ CVSS2 Base Score: 6.4 CVE: CVE-2000-0539 I'm not sure what this is, but I can't find anything on the server that looks anything like this. Web server vulnerability Impact: /some.php?=PHPE9568F35- D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests that contain specific QUERY strings. Risk Factor: Medium/ CVSS2 Base Score: 5.0 PHP is not installed. Trying to add that query string to any page does nothing because the application ignores it. And doing that phpVersion check results in a 404. Similar to this, there are dozens of errors related to JSP and Oracle that are also not installed. Web server vulnerability Impact: /admin/database/wwForum.mdb: Web Wiz Forums pre 7.5 is vulnerable to Cross-Site Scripting attacks. Default login/pass is Administrator/letmein Risk Factor: Medium/ CVSS2 Base Score: 4.0 There are several errors like this, telling me that Web Wiz Forums, Alan Ward A-Cart 2.0, IlohaMail, etc. are all vulnerable. These are not installed or referenced anywhere I can find. There are even references to pages that simply don't exist, like OpenAutoClassifieds. Can anyone point me in the right direction as to why these errors are showing up or where I might look to find these components if they are in fact installed? Note: This website and server are for a subdomain of the main website. The main website runs on a server that is running Apache/PHP, but I don't have access to that server. The report says the subdomain was the site being scanned, but is it possible for it to have scanned the main site as well?

    Read the article

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, We have a CF8 app that runs for 20-25 minutes before crashing under heavy load ~ 1200 users. This load is generated by our load testing tool: 1200 users ramped up in 5 mins (approx behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g. Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks that pointed to sessions. "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1" "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167" "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1" We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little bit in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps, and saw a thread locking a monitor, for e.g.: "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000] java.lang.Thread.State: RUNNABLE at java.util.HashMap.getEntry(HashMap.java:347) at java.util.HashMap.containsKey(HashMap.java:335) at java.util.HashSet.contains(HashSet.java:184) at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124) at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598) at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510) at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250) at coldfusion.util.FastHashtable.put(FastHashtable.java:43) - locked <0x6f7e1a78> (a coldfusion.runtime.Struct) at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027) at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117) at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377) As you see in the last line above there were several references CFMs and CFCs, and the lines have "cflock" tags, which were scoped to the "application." We (the dev team) then changed them to be scoped to a "name". After more load tests, there is no locking going on and there no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and they are not being taxed; even watched the server stats with server monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or Admin. I am just running the load test, analyzing the reports, threads etc, and sharing the results with the dev and admin teams, and trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM

    Read the article

  • An error occurred synchronizing Windows with time.windows.com

    - by Killrawr
    Okay so I've tried stopping/registering the win32tm service on this Windows Server 2008 Enterprise Computer. C:\Users\Administrator>net stop w32time The Windows Time service is stopping. The Windows Time service was stopped successfully. C:\Users\Administrator>w32tm /unregister The following error occurred: Access is denied. (0x80070005) C:\Users\Administrator>w32tm /unregister W32Time successfully unregistered. C:\Users\Administrator>w32tm /register W32Time successfully registered. C:\Users\Administrator>net start w32time The Windows Time service is starting. The Windows Time service was started successfully. (Source : http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/9bdfc2cc-4775-4435-8868-57d214e1e3ba/) And I get this error from the Date and Time, Internet Time tab (After also following the steps here). I've even tried the Atomic Time Clock Worldtimeserver and I get the error The following error occurred: The specified module could not be found. (0x8007007E). I've also disabled the Windows Firewall, that might of been blocking the synchronization. I've done a file scan with sfc /scannow that came back with no errors. C:\Users\Administrator>sfc /scannow Beginning system scan. This process will take some time. Beginning verification phase of system scan. Verification 100% complete. Windows Resource Protection did not find any integrity violations. C:\Users\Administrator> But I'm not having much luck. Is there anyway lo possibly solve this? or is the time.windows.com servers unsupported? because the software is from 2008? (I really don't know :/), My ping result to time.windows.com C:\Users\Administrator>ping time.windows.com Pinging time.microsoft.akadns.net [65.55.21.22] with 32 bytes of data: Request timed out. Request timed out. Request timed out. Request timed out. Ping statistics for 65.55.21.22: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), And tracert result C:\Users\Administratortracert time.windows.com Tracing route to time.microsoft.akadns.net [65.55.21.24] over a maximum of 30 hops: 1 1 ms <1 ms <1 ms 192.168.1.1 2 32 ms 31 ms 32 ms be2-100.bras1wtc.wlg.vf.net.nz [203.109.129.113] 3 31 ms 32 ms 31 ms be5-100.ppnzwtc01.wlg.vf.net.nz.129.109.203.in-a ddr.arpa [203.109.129.114] 4 31 ms 31 ms 31 ms gi0-2-0-3.ppnzwtc01.wlg.vf.net.nz.180.109.203.in -addr.arpa [203.109.180.210] 5 31 ms 31 ms 30 ms gi0-2-0-3.ppnzwtc02.wlg.vf.net.nz [203.109.180.2 09] 6 167 ms 166 ms 166 ms ip-141.199.31.114.VOCUS.net.au [114.31.199.141] 7 175 ms 175 ms 175 ms microsoft.com.any2ix.coresite.com [206.223.143.1 43] 8 177 ms 180 ms 176 ms xe-7-0-2-0.by2-96c-1a.ntwk.msn.net [207.46.42.17 6] 9 205 ms 205 ms 204 ms xe-10-0-2-0.co1-96c-1b.ntwk.msn.net [207.46.45.3 1] 10 * * * Request timed out. 11 * * * Request timed out. 12 * * * Request timed out. 13 * * * Request timed out. 14 * * * Request timed out. 15 * * * Request timed out. 16 ^C And nslookup C:\Users\Administrator>nslookup time.windows.com Server: UnKnown Address: 192.168.1.1 Non-authoritative answer: Name: time.microsoft.akadns.net Address: 65.55.21.22 Aliases: time.windows.com
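
    For what it's worth, time.windows.com not answering ICMP does not by itself prove that NTP (UDP port 123) is blocked, but pointing W32Time at a different set of servers is a quick way to rule the Microsoft endpoint out. A rough sketch of the usual commands, run from an elevated prompt (the pool.ntp.org names are examples, not something from the original post):

        w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update
        net stop w32time
        net start w32time
        w32tm /resync /rediscover
        w32tm /query /status

    If synchronization succeeds against another source, the problem is the upstream server or the path to it rather than the local service.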

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like the Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Clustered Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in their examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) the company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives; 2) in the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and the third-party replication tools all allow replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or if I am overlooking/missing something.

    Read the article

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
    Hello everyone, I built a server with the mentioned motherboard and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, and I've got that NIC configured with its own IP address. I can remote into it using IPMI, and I can remotely control the server settings before booting the OS (BIOS, RAID configuration, etc.). When the OS boots, I lose the mouse and keyboard; I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded. Once the OS loads I lose them - that is my problem.

    I've been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on the things I've found online, it seems that the problem could be caused by the way the OS handles USB. The server is headless - there is no keyboard, mouse, or monitor plugged into it. When I boot up the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I've read, the OS should detect a mouse and a keyboard when connecting remotely via IPMI.

    The following are the solutions I've tried; nothing has worked so far:

    - I've updated the firmware of the IPMI component to the latest version - 1.33.
    - I made sure that the mouse mode was set to Absolute (Windows OS).
    - I've loaded the factory defaults several times.
    - I've enabled Port64h/60h Emulation under the USB settings in the BIOS.
    - I've disabled USB legacy support in the BIOS.
    - I made sure the firewall wasn't blocking IPMI (disabled the firewall).

    And that's about it. I've found threads in some forums from people having the same issue as me, but they were not running the same OS - they were either running Linux or FreeBSD. Most of them fixed their problem by selecting the right mouse mode (Linux in their case). There was one other who solved the problem by disabling USB Mass Storage mode. He stated, "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html

    I suspect the solution described in the previous paragraph is somehow related to my problem. I've found several threads on the internet describing the same issue, but none of them were with Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the cause; I just don't know how to go about applying a similar fix in Windows Server 2008 R2. In any case, I could use your expertise. Maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance.

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/) they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job - high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM macbook and an 8-core ubuntu box with 64GB RAM I saw dramatically WORSE read performance on the linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. Linux box default settings were as follows: vm.swappiness =60 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 vm.dirty_expire_centisecs =3000 vm.dirty_writeback_centisecs=500 I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MYSQL, etc.), experimented, and adjusted as below. vm.swappiness=10 vm.dirty_background_ratio=5 vm.dirty_ratio=5 vm.dirty_writeback_centisecs=250 vm.dirty_expire_centisecs=500 I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source - and suddenly I can read at 1ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j
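
    If the adjusted values turn out to be the keepers, the usual way to make them survive a reboot is /etc/sysctl.conf. A minimal sketch, assuming the values quoted above; the readahead line is a general MongoDB production recommendation rather than something from this post, and the device name is only an example:

        # /etc/sysctl.conf -- persist the tuned VM settings
        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

        # then, from a shell, apply without rebooting:
        sudo sysctl -p

        # low block-device readahead is also commonly advised for memory-mapped
        # MongoDB data files (check the current value with `blockdev --report` first)
        sudo blockdev --setra 32 /dev/sdb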

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events: A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in degraded state. While the pool is in degraded state, the pool is mutated (data is written and/or changed.) The connectivity issue is physically repaired such that device D is reliable again. Knowing that most data on D is valid, and not wanting to stress the pool with a resilver needlessly, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected. I've read that zpool clear only clears the error counter, and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device - however, the pool status will not reflect this issue! I would at least assume based on ZFS' robust integrity mechanisms that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems: Reads are not guaranteed to hit all mutations unless a scrub is done; and Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures. Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state, and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.
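
    On the "reads are not guaranteed to hit all mutations" point: the usual belt-and-braces step after clearing a transient fault is an explicit scrub, which forces every block to be read, checksummed, and repaired from the surviving redundancy rather than waiting for normal reads to stumble over stale copies on D. A sketch using the names from the question (pool and device names are hypothetical):

        # clear the transient fault on D, then verify the whole pool
        zpool clear pool D
        zpool scrub pool

        # watch progress and any checksum errors that get repaired
        zpool status -v pool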

    Read the article

  • What networking hardware do I need in this situation (Fairpoint [ISP] "E-DIA" connection)?

    - by Tegeril
    Right away you'd probably want to say, "Well just ask Fairpoint." I've done that, a number of times in as many different ways I can phrase it and just keep hitting a brick wall where they will not commit to giving any useful information and instead recommend contracting an outside firm and spending a pile of money. Anyway... I'm trying to help a family member out with an office connection that is being setup. I've managed to scrape tiny details here and there from our discussions with the ISP (Fairpoint in Maine) about what is going to be done and what is going to be needed. This is the connection that is being setup: http://www.fairpoint.com/enterprise/vantagepoint/e-dia/index.jsp Information I have been given: Via this connection I can get IPs across different C blocks if that were necessary (it is not) Fairpoint is bringing hardware with them that they claim simply does the conversion from whatever line is coming in the building to ethernet, they have referred to this as the "Fairpoint Netvanta" which I know suggests a line of products that I have looked up, but some (most? all?) of those seems to handle all the routing that I saw. Fairpoint says that I need to bring my own router to sit behind their device. They have literally declined to even suggest products that have worked for other clients in the past and fall back on "any business router works, not a home router." That alone makes my head spin. Detail and clarity hit a brick wall from there. At one moment I got them to cough up that the router I provide needs to be able to do VPN tunneling but they typically fall back to "not a home router" and I was even given "just a business router, Cisco or something, it'll be $500-$1000". Now I know that VPN tunneling routers exist well below that price point and since this connection is going to one machine, possibly two only via ethernet, my desire to purchase networking hardware that over-delivers what I need is not very high. They are literally setting all this up, have provided no configuration details for after they finish, and expect me to just plunk a $500+ router behind it and cross my fingers or contract out to a third party company. If there were other options available for the location, I would have dropped them in a second, but there aren't. The device that is connected requires a static IP and I'm honestly a bit hazy on the necessity of an additional router behind their device and generally a bit over my head. I presume that the router needs to be able to serve external static IPs to its clients, but I really don't know what is going to show up when they come to do the install. This was originally going to be run via an ADSL bridge modem with a range of static IPs (which is easy and is currently setup properly) but the location is too far from the telco to get speeds that we really want for upload and this is also a connection that needs high availability. Any suggestions would be greatly appreciated (I see a number of options in the Cisco Small Business line and other competitors that aren't going to break the bank…), especially if you've worked with Fairpoint before! Thanks for reading my wall of text.

    Read the article

  • JBoss 6 deployment of message-driven bean error

    - by AntonioP
    Hello, I have a Java EE application with one message-driven bean. It runs fine on JBoss 4, but when I configure the project for JBoss 6 and deploy it there, I get this error:

        WARN [org.jboss.ejb.deployers.EjbDeployer.verifier] EJB spec violation: ...
        The message driven bean must declare one onMessage() method. ...
        org.jboss.deployers.spi.DeploymentException: Verification of Enterprise Beans failed, see above for error messages.

    But my bean HAS the onMessage method! Otherwise it would not have worked on JBoss 4 either. Why do I get this error?

    Edit: The class in question looks like this:

        package ...
        imports ...

        public class MyMDB implements MessageDrivenBean, MessageListener {

            AnotherSessionBean a;
            OneMoreSessionBean b;

            public MyMDB() {}

            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    try {
                        // Lookup sessionBeans by jndi, create them
                        lookupABean();
                        // check message-type, then invoke
                        a.handle(message);
                        // else
                        b.handle(message);
                    } catch (SomeException e) {
                        // handling it
                    }
                }
            }

            public void lookupABean() {
                try {
                    // code to lookup session beans and create.
                } catch (CreateException e) {
                    // handling it and catching NamingException too
                }
            }
        }

    Edit 2: And these are the relevant parts of jboss.xml:

        <message-driven>
            <ejb-name>MyMDB</ejb-name>
            <destination-jndi-name>topic/A_Topic</destination-jndi-name>
            <local-jndi-name>A_Topic</local-jndi-name>
            <mdb-user>user</mdb-user>
            <mdb-passwd>pass</mdb-passwd>
            <mdb-client-id>MyMessageBean</mdb-client-id>
            <mdb-subscription-id>subid</mdb-subscription-id>
            <resource-ref>
                <res-ref-name>jms/TopicFactory</res-ref-name>
                <jndi-name>jms/TopicFactory</jndi-name>
            </resource-ref>
        </message-driven>
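
    Not an answer to the verifier message itself, but on JBoss 6 the annotation-driven EJB 3 style avoids the MessageDrivenBean interface and most of the jboss.xml plumbing, and is often the quickest way past strict verifier complaints. A rough sketch; the activation-config values mirror the descriptor above, but treat them as illustrative rather than exact:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        // EJB 3 style MDB: no MessageDrivenBean interface and no <message-driven>
        // descriptor entry needed; the container wires the destination from the
        // activation config below.
        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Topic"),
            @ActivationConfigProperty(propertyName = "destination",
                                      propertyValue = "topic/A_Topic"),
            @ActivationConfigProperty(propertyName = "subscriptionDurability",
                                      propertyValue = "Durable")
        })
        public class MyMDB implements MessageListener {

            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    // look up the session beans and dispatch, as in the original bean
                }
            }
        }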

    Read the article

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.) Code quality does not seem popular. Some examples from my experience include Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests. Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored. Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it propably looks different for conferences on agile topics, as unit testing and CI and such has a higer value there.) So why is code quality so unpopular/considered boring? EDIT: Thank your for your answers. Most of them concern unit testing (and has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.

    Read the article

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo Simulation that calculates Capital Adequacy figures by region for a set of Obligors. It is running about 10 x too slow for where it will need to be in production and number or daily runs required. Additionally the granularity of the result figures will need to be improved down to desk possibly book level at some stage, the code I've been given is basically a prototype which is used by business units in a semi production capacity. The application is currently single threaded so I'll need to make it multi-threaded, may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library but I'm constrained to .NET 2 on the server at this bank so I may have to consider this guy's port, http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I am trying my best to get them to upgrade to .NET 3.5 SP1 but it's a major exercise in an organisation of this size and might not be possible in my contract time frames. I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones? A lot of the execution time is spent generating uniform random numbers and then translating this to a normally distributed random number. They are using a C# Mersenne twister implementation. I am not sure where they got it or if it's the best way to go about this (or best implementation) to generate the uniform random numbers. Then this is translated to a normally distributed version for use in the calculation (I haven't delved into the translation code yet). Also what is the experience using the following? http://quantlib.org http://www.qlnet.org (C# port of quantlib) or http://www.boost.org Any alternatives you know of? I'm a C# developer so would prefer C#, but a wrapper to C++ shouldn't be a problem, should it? Maybe even faster leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step. Also they may have some other functions that will be helpful in the subsequent calculations. Also the computer this is on is a quad core Opteron 275, 8 GB memory but Windows Server 2003 Enterprise 32 bit. Should I advise them to upgrade to a 64 bit OS? Any links to articles supporting this decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
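
    On the "uniform then translate" cost specifically: the Marsaglia polar method turns pairs of uniforms from the existing Mersenne twister directly into pairs of standard normals without the trig calls of basic Box-Muller. A C# sketch that targets .NET 2.0 (hence a named delegate instead of Func<double>); the delegate just stands in for whatever the twister already exposes, and once the run goes parallel each thread should get its own instance. The libraries mentioned in the question may well ship faster generators (a ziggurat implementation, for example); this is only the simple self-contained option:

        using System;

        public delegate double UniformSource(); // returns a uniform sample in (0,1)

        public sealed class PolarGaussian
        {
            private readonly UniformSource _uniform;
            private double _spare;
            private bool _hasSpare;

            public PolarGaussian(UniformSource uniform)
            {
                _uniform = uniform;
            }

            // Returns one standard normal deviate; caches the second of each pair.
            public double Next()
            {
                if (_hasSpare) { _hasSpare = false; return _spare; }

                double u, v, s;
                do
                {
                    u = 2.0 * _uniform() - 1.0;
                    v = 2.0 * _uniform() - 1.0;
                    s = u * u + v * v;
                } while (s >= 1.0 || s == 0.0);

                double factor = Math.Sqrt(-2.0 * Math.Log(s) / s);
                _spare = v * factor;
                _hasSpare = true;
                return u * factor;
            }
        }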

    Read the article

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, Bought TP6 called Borland the next day and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons; Spent all day writing ASP & SQL for someone else. PC Techniques magazine went away. I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi or did when it was Borland (before Borland bought DBase and all the other crap), I'd like to salvage as much of my D5E code as possible but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS Sql and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures and is so easy to deploy that even the Geico gecko could deploy it? 10/25/2009 18:53 PM EST Re-Opened After Reading Install Docs for Delphi 2010 I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles: Installation Notes: http://edn.embarcadero.com/article/39754 Release Notes: http://edn.embarcadero.com/article/39758 the release notes state the following... MSSQL driver requires the installation of the SQL Native Client. SQL Native Client 2008 is required for dbxmss.dll. SQL Native Client 2005 is required for dbxmss9.dll I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums. read thread here. After reading this thread and some careful thought I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses I need to have a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way: I have an Articles table: CREATE TABLE [dbo].[Articles]( [server_ref] [int] NOT NULL, [article_ref] [int] NOT NULL, [article_title] [varchar](400) NOT NULL, [category_ref] [int] NOT NULL, [size] [bigint] NOT NULL ) Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing: Indexes are disabled on the Articles table. For each dumped text file Data is BULK copied to a temporary table. Temporary table is updated. Old data for the server is dropped from the Articles table. Temporary table data is copied to Articles table. Temporary table dropped. Once this process is complete for all servers the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. i.e. SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%' has been satisfactory but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions? SELECT * FROM Articles WHERE article_title LIKE '%criteria%' is horrendous. Partitioning is a feature of SQL Server Enterprise but $$$ which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying across the network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
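
    On the LIKE '%criteria%' point: PostgreSQL's pg_trgm contrib module can index exactly that kind of infix search, which is one of the bigger wins of the migration. A hedged sketch; if memory serves, GIN trigram indexes need 8.4 or later and the LIKE/ILIKE index support arrived around 9.1, and on older releases the module is loaded by running the contrib SQL script instead of CREATE EXTENSION:

        -- once per database (9.1+ syntax)
        CREATE EXTENSION pg_trgm;

        -- index the title for infix matching
        CREATE INDEX idx_articles_title_trgm
            ON articles USING gin (article_title gin_trgm_ops);

        -- the query shape stays the same and can now use the index
        EXPLAIN ANALYZE
        SELECT * FROM articles
        WHERE server_ref = 33
          AND article_title LIKE '%criteria%';

    Full-text search (tsvector/tsquery) is the heavier alternative if word-based matching is acceptable instead of raw substring matching.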

    Read the article

  • Custom validator not invoked when using Validation Application Block through configuration

    - by Chris
    I have set up a ruleset in my configuration file which has two validators, one of which is a built-in NotNullValidator, the other of which is a custom validator. The problem is that I see the NotNullValidator hit, but not my custom validator. The custom validator is being used to validate an Entity Framework entity object. I have used the debugger to confirm the NotNull is hit (I forced a failure condition so I saw it set an invalid result), but it never steps into the custom one. I am using MVC as the web app, so I defined the ruleset in a config file at that layer, but my custom validator is defined in another project. However, I wouldn't have thought that to be a problem because when I use the Enterprise Library Configuration tool inside Visual Studio 2008 it is able to set the type properly for the custom validator. As well, I believe the custom validator is fine as it builds ok, and the config tool can reference it properly. Does anybody have any ideas what the problem could be, or even what to do/try to debug further? Here is a stripped down version of my custom validator: [ConfigurationElementType(typeof(CustomValidatorData))] public sealed class UserAccountValidator : Validator { public UserAccountValidator(NameValueCollection attributes) : base(string.Empty, "User Account") { } protected override string DefaultMessageTemplate { get { throw new NotImplementedException(); } } protected override void DoValidate(object objectToValidate, object currentTarget, string key, ValidationResults results) { if (!currentTarget.GetType().Equals(typeof(UserAccount))) { throw new Exception(); } UserAccount userAccountToValidate = (UserAccount)currentTarget; // snipped code ... this.LogValidationResult(results, "The User Account is invalid", currentTarget, key); } } Here is the XML of my ruleset in Validation.config (the NotNull rule is only there to force a failure so I could see it getting hit, and it does): <validation> <type defaultRuleset="default" assemblyName="MyProj.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="MyProj.Entities.UserAccount"> <ruleset name="default"> <properties> <property name="HashedPassword"> <validator negated="true" messageTemplate="" messageTemplateResourceName="" messageTemplateResourceType="" tag="" type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.NotNullValidator, Microsoft.Practices.EnterpriseLibrary.Validation, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Not Null Validator" /> </property> <property name="Property"> <validator messageTemplate="" messageTemplateResourceName="" messageTemplateResourceType="" tag="" type="MyProj.Entities.UserAccountValidator, MyProj.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Custom Validator" /> </property> </properties> </ruleset> </type> </validation> And here is the stripped down version of the way I invoke the validation: var type = entity.GetType() var validator = ValidationFactory.CreateValidator(type, "default", new FileConfigurationSource("Validation.config")) var results = validator.Validate(entity) Any advice would be much appreciated! Thanks, Chris
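
    If the config-file route keeps skipping the custom validator, one alternative worth a look is the block's attribute-driven self-validation, which sidesteps the type resolution in Validation.config entirely. A sketch from memory of the EntLib 4.1 API; double-check the attribute names and the ValidationResult constructor against the installed version, and note that self-validation is discovered from attributes, so the validator may need to be created without forcing a config-only source:

        using Microsoft.Practices.EnterpriseLibrary.Validation;
        using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

        [HasSelfValidation]
        public partial class UserAccount
        {
            [SelfValidation]
            public void ValidateAccount(ValidationResults results)
            {
                // same checks the UserAccountValidator performs today
                if (string.IsNullOrEmpty(this.HashedPassword))
                {
                    results.AddResult(new ValidationResult(
                        "The User Account is invalid", this, "UserAccount", null, null));
                }
            }
        }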

    Read the article

  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day, Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach. After using various combinations of tools, some high-tech, some low-tech, across these projects we now wish to try adopting a suitable tool suite. Our manager is to some extent attempting to force the adoption of a single suite of tools for Scrum. I've looked at the SO question "Best Scrum tools" and most people seem to recommend either: a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or a monolithic tool that tries to satisfy as much as possible of the process, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc. Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses with the serious enterprise type solutions being the best sort of fit. But all have some short comings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear I started wondering if we were trying to force adoption of a tool on a "best fit" basis. I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, and the code base which must use a well-known SCM, e.g. SVN, Hg, etc. Then if we take that as the common reference points for the tools employed then we would be able to use a group of tools to handle the different aspects of the Scrum process rather than try forcing a fit of a single tool would is a bit like forcing a square peg into the round hole. In this way, providing you've agreed your common reference points, you can use several tools, each performing their role better than a could be done by a single component in a monolithic tool suite. Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is their a better choice of points where the tools would meet? cheers,

    Read the article

  • Good Secure Backups Developers at Home

    - by slashmais
    What is a good, secure, method to do backups, for programmers who do research & development at home and cannot afford to lose any work? Conditions: The backups must ALWAYS be within reasonably easy reach. Internet connection cannot be guaranteed to be always available. The solution must be either FREE or priced within reason, and subject to 2 above. Status Report This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere): BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk. Storebackup is a backup utility that stores files on other disks. mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations. Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program. AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independant automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location. rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc. Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives Other Possibilities: Using a Distributed Version Control System (DVCS) such as Git(/Easy Git), Bazaar, Mercurial answers the need to have the backup available locally. Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account. Strategies See crazyscot's answer

    Read the article

  • Turning off ASP.Net WebForms authentication for one sub-directory

    - by Keith
    I have a large enterprise application containing both WebForms and MVC pages. It has existing authentication and authorisation settings that I don't want to change. The WebForms authentication is configured in the web.config: <authentication mode="Forms"> <forms blah... blah... blah /> </authentication> <authorization> <deny users="?" /> </authorization> Fairly standard so far. I have a REST service that is part of this big application and I want to use HTTP authentication instead for this one service. So, when a user attempts to get JSON data from the REST service it returns an HTTP 401 status and a WWW-Authenticate header. If they respond with a correctly formed HTTP Authorization response it lets them in. The problem is that WebForms overrides this at a low level - if you return 401 (Unauthorised) it overrides that with a 302 (redirection to login page). That's fine in the browser but useless for a REST service. I want to turn off the authentication setting in the web.config: <location path="rest"> <system.web> <authentication mode="None" /> <authorization><allow users="?" /></authorization> </system.web> </location> The authorisation bit works fine, but when I try to change the authentication I get an exception: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. I'm configuring this at application level though - it's in the root web.config How do I override the authentication so that all of the rest of the site uses WebForms authentication and this one directory uses none? This is similar to another question: 401 response code for json requests with ASP.NET MVC, but I'm not looking for the same solution - I don't want to just remove the WebForms authentication and add new custom code globally, there's far to much risk and work involved. I want to change just the one directory in configuration.
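
    For completeness, the workaround most people end up with on this vintage of ASP.NET (before HttpResponse.SuppressFormsAuthenticationRedirect existed, which arrived later in .NET 4.5 if I remember right) is to let Forms authentication do its thing and then undo the login-page redirect for the REST path, so the service can send a real 401 with its own WWW-Authenticate header. A hedged sketch for Global.asax.cs; the path test and realm are illustrative:

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;

            bool isRestCall = context.Request.AppRelativeCurrentExecutionFilePath
                .StartsWith("~/rest", StringComparison.OrdinalIgnoreCase);

            if (isRestCall && context.Response.StatusCode == 302)
            {
                // Forms auth turned the 401 into a redirect to the login page; put it back
                context.Response.ClearContent();
                context.Response.RedirectLocation = null;
                context.Response.StatusCode = 401;
                context.Response.AddHeader("WWW-Authenticate", "Basic realm=\"rest\"");
            }
        }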

    Read the article

  • Database Logon Error in crystal report

    - by yasar arafat
    am really very dumb as this problem is being answered so many times but eally its above my head. i dont have any idea about working with crystal reports or asp.net but due to some reason i have to work on it. after trying very hard i m able to design on crystal report connecting it with mysql database. now when i see in the crystalreport preview i am able to see the report and the data ![enter image description here][1] but when i try to run it on server it says database logon failed![enter image description here][2] now i really dont know what to do i am pasting my aspx and aspx.vb code here and please repley in english not computer language believe me i m super dumb enter code here aspx codes <%@ Page Title="" Language="VB" MasterPageFile="~/Master/Site.master"AutoEventWireup="false" CodeFile="ManHourCostExpenditure.aspx.vb" Inherits="ManHourCostExpenditure" %> <%@ Register assembly="CrystalDecisions.Web, Version=13.0.2000.0, Culture=neutral, PublicKeyToken=692fbea5521e1304" namespace="CrystalDecisions.Web" tagprefix="CR" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" Runat="Server"> <CR:CrystalReportViewer ID="CrystalReportViewer1" runat="server" AutoDataBind="True" EnableDatabaseLogonPrompt="False" EnableParameterPrompt="False" GroupTreeImagesFolderUrl="" Height="1202px" ReportSourceID="CrystalReportSource1" ToolbarImagesFolderUrl="" ToolPanelView="None" ToolPanelWidth="200px" Width="903px" BorderColor="#660033" BorderStyle="Solid" BorderWidth="1px" HasCrystalLogo="False" /> <CR:CrystalReportSource ID="CrystalReportSource1" runat="server"> <Report FileName="CrDiscProjManHoursActVsEst.rpt"> </Report> </CR:CrystalReportSource> </asp:Content> aspxdotvb codes Imports CrystalDecisions.CrystalReports.Engine Imports CrystalDecisions.Shared Imports CrystalDecisions.Enterprise Imports CrystalDecisions.ReportSource Imports CrystalDecisions.Web Imports CrystalDecisions.Windows.Forms Imports MySql Imports MySql.Data Imports MySql.Data.MySqlClient Imports System.Collections Imports System.Collections.Generic Imports System.Text Imports CrystalDecisions Imports CrystalDecisions.CrystalReports Imports CrystalReportsReportDefModelLib Partial Class ManHourCostExpenditure Inherits System.Web.UI.Page Dim CrRpt As New CrystalDecisions.CrystalReports.Engine.ReportDocument Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim StrConn As String = "SERVER=localhost;DATABASE=fluor;UID=root; PWD=root;" Dim Conn As New MySqlConnection(StrConn) Conn.Open() Dim CrRpt As New CrystalDecisions.CrystalReports.Engine.ReportDocument 'Session.Add("REPORT_KEY", CrRpt) CrRpt.Load(Server.MapPath("CrManHourCostExpenditure.rpt")) CrRpt.PrintOptions.PaperOrientation = PaperOrientation.Landscape 'If IsPostBack Then ' CrRpt = CType(Session.Item("REPORT_KEY"), Group) ' CrystalReportViewer1.ReportSource = CrRpt 'End If CrystalReportViewer1.ReportSource = CrRpt 'CrystalReportViewer1.DataBind() CrystalReportViewer1.RefreshReport() Conn.Close() 'CrRpt.Close() 'CrRpt.Dispose() End Sub Protected Sub Page_Unload(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Unload 'CrystalReportViewer1.ReportSource.Close() CrRpt.Close() CrRpt.Dispose() End Sub 'Protected Sub CrystalReportViewer1_Unload(ByVal sender As Object, ByVal e As System.EventArgs) Handles CrystalReportViewer1.Unload ' CrRpt.Close() ' CrRpt.Dispose() 'End Sub End Class please help me. thanks in advance.... yasir
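
    Setting the report's database credentials in code is the usual first thing to try when the designer preview works but the deployed page reports a logon failure, because credentials saved at design time are often not carried to the server. A rough VB sketch using the names already in the question (untested here; depending on how the MySQL connection was defined, the two-argument SetDatabaseLogon overload or an ODBC DSN may be needed instead):

        CrRpt.Load(Server.MapPath("CrManHourCostExpenditure.rpt"))
        CrRpt.SetDatabaseLogon("root", "root", "localhost", "fluor")

        ' subreports carry their own connections and need credentials too
        For Each subRpt As CrystalDecisions.CrystalReports.Engine.ReportDocument In CrRpt.Subreports
            subRpt.SetDatabaseLogon("root", "root", "localhost", "fluor")
        Next

        CrystalReportViewer1.ReportSource = CrRpt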

    Read the article

  • How hard programming is? Really. [closed]

    - by Bubba88
    Hi! The question is about your perception of programming activity. How hard/exacting this task is? There is much buzz about programming nowadays, people say that programmers are smart, very technical and abstract at a time, know much about world, psychology etc.. They say, that programmers got really powerful brain thing, cause there is much to keep in consideration simultaneously again with much information folded into each other associatively (up 10 levels of folding they say))) Still, there are some terms to specify at our own.. So that is the question: What do you think about programming in general? Is it hard? Is it 'for everyone' or for the particular kind of people only? How much non-CS background do you need to program (just to program, really; enterprise applications for example)? How long is the learning curve? (again, for programming in general) And another bunch of random questions: - If you were not to like/love programming, would that be a serious trouble bothering your current employment? - If you were to start from the beginning, would you chose that direction this time? - What other areas (jobs or maybe hobbies) are comparable to programming in the way they can explode someone's lovely brain? - Is 'non turing-complete programming' (SQL, XML, etc.) comparable to what we do or is it really way easier, less requiring, cheap and akin to cooking :)? Well, the essence is: How would you describe programming activity WRT to its difficulty? Or, on the other hand: Did you ever catch yourself thinking at some point: OMG, it's sooo hard! I don't know how would I ever program, even carried away this way and doing programming just for fun? It's very interesting to know your opinion, your'e the programmers after all. I mean much people must be exaggerating/speculating about the thing they do not really know about. But that musn't be the case here on SO :) P.S.: I'll try my best to update this post later, and you please edit it too. At least I'll get decent English in my question text :)

    Read the article

  • Login function runs different between local and server

    - by quangnd
    Here is my check login function:

        protected bool checkLoginStatus(String email, String password)
        {
            bool loginStatus = false;
            bool status = false;
            try
            {
                Connector.openConn();
                String str = "SELECT * FROM [User]";
                SqlCommand cmd = new SqlCommand(str, Connector.conn);
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                DataSet ds = new DataSet();
                da.Fill(ds, "tblUser");
                // check valid
                foreach (DataRow dr in ds.Tables[0].Rows)
                {
                    if (email == dr["Email"].ToString() && password == Connector.base64Decode(dr["Password"].ToString()))
                    {
                        Session["login_status"] = true;
                        Session["username"] = dr["Name"].ToString();
                        Session["userId"] = dr["UserId"].ToString();
                        status = true;
                        break;
                    }
                }
            }
            catch (Exception ex) { }
            finally
            {
                Connector.closeConn();
            }
            return status;
        }

    And I call it from my aspx page:

        String email = Login1.UserName.Trim();
        String password = Login1.Password.Trim();
        if (checkLoginStatus(email, password))
            Response.Redirect(homeSite);
        else
            lblFailure.Text = "Invalid!";

    I ran this page at localhost successfully. When I published it to the server, the function only works when the email and password are correct. Otherwise, this error occurs:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    I tried opening SQL Server 2008 Configuration Manager and enabling the SQL Server Browser service (Logon as: NT Authority/Local Service), but it still errors.

    (Note: here is the connection string used by openConn(). At localhost, running SQL Express 2005:

        connectionString="Data Source=MYLAPTOP\SQLEXPRESS;Initial Catalog=Spider_Vcms;Integrated Security=True" />

    At the server, running SQL Server Enterprise 2008:

        connectionString="Data Source=SVR;Initial Catalog=Spider_Vcms;User Id=abc;password=123456;" />
    )

    Does anyone have an answer for my problem? :( Thanks a lot!
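
    As a side note, separate from the connection error itself: loading every row from [User] and comparing in a loop can be replaced with a parameterized query, which is a single round trip and doesn't pull the whole table across the wire. A sketch that reuses the Connector helper from the question (the column list is assumed from the code shown):

        Connector.openConn();
        string sql = "SELECT UserId, Name, Password FROM [User] WHERE Email = @Email";
        using (SqlCommand cmd = new SqlCommand(sql, Connector.conn))
        {
            cmd.Parameters.AddWithValue("@Email", email);
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                if (rdr.Read() &&
                    password == Connector.base64Decode(rdr["Password"].ToString()))
                {
                    Session["login_status"] = true;
                    Session["username"] = rdr["Name"].ToString();
                    Session["userId"] = rdr["UserId"].ToString();
                    status = true;
                }
            }
        }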

    Read the article

  • Loading jar file using JCL(JarClassLoader ) : classpath in manifest is ignored ..

    - by Xinus
    I am trying to load a jar file using JCL with the following code:

        FileInputStream fis = new FileInputStream(new File(
                "C:\\Users\\sunils\\glassfish-tests\\working\\test.jar"));
        JarClassLoader jc = new JarClassLoader();
        jc.add(fis);
        Class main = jc.loadClass("highmark.test.Main");
        String[] str = {};
        main.getMethod("test").invoke(null); //.getDeclaredMethod("main",String[].class).invoke(null,str);
        fis.close();

    But when I try to run this program I get this exception:

        Exception in thread "main" java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at Main.main(Main.java:21)
        Caused by: java.lang.RuntimeException: Embedded startup not found, classpath is probably incomplete
            at org.glassfish.api.embedded.Server.<init>(Server.java:292)
            at org.glassfish.api.embedded.Server.<init>(Server.java:75)
            at org.glassfish.api.embedded.Server$Builder.build(Server.java:185)
            at org.glassfish.api.embedded.Server$Builder.build(Server.java:167)
            at highmark.test.Main.test(Main.java:33)
            ... 5 more

    According to this, it is not able to locate the class, but when I run the jar file explicitly it runs fine. It seems like JCL is ignoring other classes present in the jar file. The MANIFEST.MF file in the jar shows:

        Manifest-Version: 1.0
        Class-Path: .
        Main-Class: highmark.test.Main

    It seems to be ignoring Class-Path: . even though the jar runs fine when I launch it with Java directly. This is just a test; in reality the jar file arrives as an InputStream and cannot be stored on the filesystem. How can I overcome this problem? Is there any workaround? Thanks for any help.

    UPDATE: Here is the jar's Main class:

        package highmark.test;

        import org.glassfish.api.embedded.*;
        import java.io.*;
        import org.glassfish.api.deployment.*;
        import com.sun.enterprise.universal.io.FileUtils;

        public class Main {

            public static void main(String[] args) throws IOException, LifecycleException, ClassNotFoundException {
                test();
            }

            public static void test() throws IOException, LifecycleException, ClassNotFoundException {
                Server.Builder builder = new Server.Builder("test");
                Server server = builder.build();
                server.createPort(8080);
                ContainerBuilder containerBuilder = server.createConfig(ContainerBuilder.Type.web);
                server.addContainer(containerBuilder);
                server.start();
                File war = new File("C:\\Users\\sunils\\maventests\\simple-webapp\\target\\simple-webapp.war"); //(File) inputStream.readObject();
                EmbeddedDeployer deployer = server.getDeployer();
                DeployCommandParameters params = new DeployCommandParameters();
                params.contextroot = "simple";
                deployer.deploy(war, params);
            }
        }
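
    One workaround sketch (JDK classes only, nothing specific to JCL): spool the incoming stream to a temporary file, read the manifest yourself, and build a URLClassLoader over the jar plus every Class-Path entry, so nothing in the manifest is silently ignored. The question says the jar cannot live on the filesystem, so a deleted-on-exit temp file may or may not be acceptable; and with Class-Path set to "." this mostly confirms whether the manifest is the real problem, since the embedded GlassFish error may simply mean its dependency jars are not on any class path at all. Names and paths below are illustrative:

        import java.io.*;
        import java.net.URL;
        import java.net.URLClassLoader;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.jar.JarFile;
        import java.util.jar.Manifest;

        public class ManifestAwareLoader {

            public static ClassLoader fromStream(InputStream in) throws IOException {
                // spool the stream to a temp file so a JarFile/URL can point at it
                File jar = File.createTempFile("embedded", ".jar");
                jar.deleteOnExit();
                OutputStream out = new FileOutputStream(jar);
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
                out.close();

                List<URL> urls = new ArrayList<URL>();
                urls.add(jar.toURI().toURL());

                // add every Class-Path entry from the manifest, resolved next to the jar
                JarFile jf = new JarFile(jar);
                Manifest mf = jf.getManifest();
                if (mf != null) {
                    String cp = mf.getMainAttributes().getValue("Class-Path");
                    if (cp != null) {
                        for (String entry : cp.trim().split("\\s+")) {
                            urls.add(new File(jar.getParentFile(), entry).toURI().toURL());
                        }
                    }
                }
                jf.close();

                return new URLClassLoader(urls.toArray(new URL[0]),
                                          ManifestAwareLoader.class.getClassLoader());
            }
        }

    Usage would be roughly Class main = ManifestAwareLoader.fromStream(fis).loadClass("highmark.test.Main"); in place of the JarClassLoader calls.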

    Read the article

  • CDI SessionScoped Bean results in two instances in same session

    - by Ryan
    I've got two instances of a SessionScoped CDI bean for the same session. I was under the impression that there would be one instance generated for me by CDI, but it generated two. Am I misunderstanding how CDI works, or did I find a bug?

    Here is the bean code:

    package org.mycompany.myproject.session;

    import java.io.Serializable;
    import javax.enterprise.context.SessionScoped;
    import javax.faces.context.FacesContext;
    import javax.inject.Named;
    import javax.servlet.http.HttpSession;

    @Named
    @SessionScoped
    public class MyBean implements Serializable {

        private String myField = null;

        public MyBean() {
            System.out.println("MyBean constructor called");
            FacesContext fc = FacesContext.getCurrentInstance();
            HttpSession session = (HttpSession) fc.getExternalContext().getSession(false);
            String sessionId = session.getId();
            System.out.println("Session ID: " + sessionId);
        }

        public String getMyField() {
            return myField;
        }

        public void setMyField(String myField) {
            this.myField = myField;
        }
    }

    Here is the Facelet code:

    <?xml version='1.0' encoding='UTF-8' ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:f="http://java.sun.com/jsf/core">
        <f:view contentType="text/html" encoding="UTF-8">
            <h:head>
                <title>Test</title>
            </h:head>
            <h:body>
                <h:form id="form">
                    <h:inputText value="#{myBean.myField}"/>
                    <h:commandButton value="Submit"/>
                </h:form>
            </h:body>
        </f:view>
    </html>

    Here is the output from deployment and from navigating to the page:

    INFO: Loading application org.mycompany_myproject_war_1.0-SNAPSHOT at /myproject
    INFO: org.mycompany_myproject_war_1.0-SNAPSHOT was successfully deployed in 8,237 milliseconds.
    INFO: MyBean constructor called
    INFO: Session ID: 175355b0e10fe1d0778238bf4634
    INFO: MyBean constructor called
    INFO: Session ID: 175355b0e10fe1d0778238bf4634

    Using GlassFish 3.0.1.
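    One hedged explanation rather than a confirmed bug: @SessionScoped is a normal scope, so CDI hands the page a client proxy, and the bean constructor can be invoked both when that proxy is built and when the actual contextual instance is created -- which makes the constructor log twice for a single session even though only one instance holds state. A minimal sketch of moving the initialization into @PostConstruct, which runs once per contextual instance and never on the proxy:

    package org.mycompany.myproject.session;

    import java.io.Serializable;
    import javax.annotation.PostConstruct;
    import javax.enterprise.context.SessionScoped;
    import javax.faces.context.FacesContext;
    import javax.inject.Named;
    import javax.servlet.http.HttpSession;

    @Named
    @SessionScoped
    public class MyBean implements Serializable {

        private String myField;

        @PostConstruct
        public void init() {
            // Runs once on the real instance, after injection is complete.
            FacesContext fc = FacesContext.getCurrentInstance();
            HttpSession session = (HttpSession) fc.getExternalContext().getSession(false);
            System.out.println("MyBean initialized, session ID: "
                    + (session != null ? session.getId() : "no session yet"));
        }

        public String getMyField() {
            return myField;
        }

        public void setMyField(String myField) {
            this.myField = myField;
        }
    }

    If the constructor still seems to run more often than expected, logging this.getClass().getName() inside it is a quick way to tell proxy instantiations (class names typically contain a proxy suffix) apart from the real bean.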

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be.

    For example, the site has a large social component to it and a public sales interface. At the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications?

    Some strategies I'm evaluating:

    1) Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications, and create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it used to scale out the read capabilities of a single application rather than to support multiple different ones). I'm also worried about potential latency issues if the slave is not on the same network.

    2) Create new, more task- or department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back, and it seemed to advocate this for distributed systems. (Alternatively, in some cases basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail.

    3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and push the data to the public site, while the user/group admin functionality runs on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two options, especially the re-architecting.

    I'm sure the answers will be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • no such file to load -- rails (MissingSourceFile)... say what?!!

    - by Julian
    Hello, I'm having an obnoxious and weird problem while trying to include the ThinkingTank gem in my Rails project. When I include gem 'thinkingtank' in my project's Gemfile, I get the following error:

    ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require': no such file to load -- rails (MissingSourceFile)
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/thinkingtank-0.0.5/lib/thinkingtank.rb:1
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `each'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `require'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `each'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `require'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/bundler-1.0.7/lib/bundler.rb:112:in `require'
     from ~/git/myproject/config/boot.rb:121:in `load_environment'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/rails-2.3.5/lib/initializer.rb:137:in `process'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
     from ~/.rvm/gems/ree-1.8.7-2010.01/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
     from ~/git/myproject/config/environment.rb:9
     from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:254:in `require'
     from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:254:in `load_modules'
     from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:252:in `each'
     from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:252:in `load_modules'
     from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb/init.rb:21:in `setup'
     from ~/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/irb.rb:54:in `start'
     from ~/.rvm/rubies/ree-1.8.7-2010.01/bin/irb:17

    The output from ruby -v is:

    ruby 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.6.0], MBARI 0x6770, Ruby Enterprise Edition 2010.01

    And the output from rails -v is:

    Rails 2.3.5

    I've followed the basic guidelines from the documentation and from similar SA questions, but none of those issues involve the rails gem itself going missing. And yes, we are including rails in our Gemfile =) Thank you in advance.

    Read the article
