Search Results

Search found 8830 results on 354 pages for 'enterprise deployment'.

  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a third-year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information:

    Source server: Windows Server 2012 Standard Evaluation
    - DNS server (local only)
    - Active Directory Domain Services
    - File and Storage services
    - A few other server roles

    Destination server: Windows Server 2012 Essentials OEM (Hyper-V client)
    - Running under a temporary Hyper-V host (the Hyper-V host will be migrated back to the old machine after the original server is virtualized as a client).
    - Currently sitting at the "Select Installation Mode" screen.

    I have been following the guides on Microsoft TechNet, and today I spent most of the day getting rid of issues in the Best Practices Analyzer on the source machine. I have three remaining issues, which are all related:

    - ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry (the flavour text indicates that, during migration, the DNS server may not be found).
    - WARNING: All domains should have at least two domain controllers for redundancy.
    - WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.

    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The issue I am most concerned with for installing in migration mode is the first one, the error. If I try to set up the new server deployment and the adapter's domain controller is listed as localhost, the installation may fail (at least, this is what the Microsoft documentation suggests). But I do not have another IP address to enter here, as I have no other local domain controllers. So I did the first obvious thing that came to mind and tried to use Google's DNS servers as my alternates. That did not work, because they couldn't resolve other computers in the "forest".

    Now, I'm no expert when it comes to DNS, so please forgive my ignorance. This DNS server is concerned only with Active Directory for the local network. If I go ahead with migration and it fails, then I suppose I will just have to go ahead and install a secondary DNS server. The problem is that I am limited in the number of Windows Server keys I have available (I have two); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could install Windows Server 2012 as a secondary DNS server (I think) in a VM and use that, but that seems like a waste of time, and probably of the Windows key too; if there's a way to do it with Linux, that would be much better. Better still, do I even need a secondary DNS server for migration at all? The hints said that during migration the original machine "might" not be found. Thank you for your time and consideration.
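
    A minimal sketch of the DNS-ordering fix the Best Practices Analyzer is asking for, assuming Server 2012's DnsClient PowerShell module; the adapter alias "Ethernet" and 192.168.1.10 are hypothetical stand-ins for your adapter name and the server's own static IP:

        # Put a reachable DNS server first and the loopback second, so the
        # loopback is present but not the first entry (per the BPA rule).
        Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses ("192.168.1.10","127.0.0.1")

        # Verify the resulting order:
        Get-DnsClientServerAddress -InterfaceAlias "Ethernet" -AddressFamily IPv4

    As for the warnings: a Linux/BIND box can forward or slave the zone, but it cannot act as an Active Directory-integrated DNS replica, so it will not make the "two domain controllers" warning go away.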

  • An error occurred synchronizing Windows with time.windows.com

    - by Killrawr
    Okay, so I've tried stopping and re-registering the w32time service on this Windows Server 2008 Enterprise computer:

        C:\Users\Administrator>net stop w32time
        The Windows Time service is stopping.
        The Windows Time service was stopped successfully.

        C:\Users\Administrator>w32tm /unregister
        The following error occurred: Access is denied. (0x80070005)

        C:\Users\Administrator>w32tm /unregister
        W32Time successfully unregistered.

        C:\Users\Administrator>w32tm /register
        W32Time successfully registered.

        C:\Users\Administrator>net start w32time
        The Windows Time service is starting.
        The Windows Time service was started successfully.

    (Source: http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/9bdfc2cc-4775-4435-8868-57d214e1e3ba/)

    I still get this error from the Date and Time dialog's Internet Time tab (after also following the steps here). I've even tried the Atomic Time Clock Worldtimeserver, and I get the error "The following error occurred: The specified module could not be found. (0x8007007E)". I've also disabled the Windows Firewall, which might have been blocking the synchronization. I've done a file scan with sfc /scannow, which came back with no errors:

        C:\Users\Administrator>sfc /scannow
        Beginning system scan.  This process will take some time.
        Beginning verification phase of system scan.
        Verification 100% complete.
        Windows Resource Protection did not find any integrity violations.

    But I'm not having much luck. Is there any way to solve this? Or are the time.windows.com servers unsupported because the software is from 2008? (I really don't know.) My ping result for time.windows.com:

        C:\Users\Administrator>ping time.windows.com
        Pinging time.microsoft.akadns.net [65.55.21.22] with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for 65.55.21.22:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    And the tracert result:

        C:\Users\Administrator>tracert time.windows.com
        Tracing route to time.microsoft.akadns.net [65.55.21.24] over a maximum of 30 hops:
          1     1 ms    <1 ms    <1 ms  192.168.1.1
          2    32 ms    31 ms    32 ms  be2-100.bras1wtc.wlg.vf.net.nz [203.109.129.113]
          3    31 ms    32 ms    31 ms  be5-100.ppnzwtc01.wlg.vf.net.nz.129.109.203.in-addr.arpa [203.109.129.114]
          4    31 ms    31 ms    31 ms  gi0-2-0-3.ppnzwtc01.wlg.vf.net.nz.180.109.203.in-addr.arpa [203.109.180.210]
          5    31 ms    31 ms    30 ms  gi0-2-0-3.ppnzwtc02.wlg.vf.net.nz [203.109.180.209]
          6   167 ms   166 ms   166 ms  ip-141.199.31.114.VOCUS.net.au [114.31.199.141]
          7   175 ms   175 ms   175 ms  microsoft.com.any2ix.coresite.com [206.223.143.143]
          8   177 ms   180 ms   176 ms  xe-7-0-2-0.by2-96c-1a.ntwk.msn.net [207.46.42.176]
          9   205 ms   205 ms   204 ms  xe-10-0-2-0.co1-96c-1b.ntwk.msn.net [207.46.45.31]
         10     *        *        *     Request timed out.
         11     *        *        *     Request timed out.
         12     *        *        *     Request timed out.
         13     *        *        *     Request timed out.
         14     *        *        *     Request timed out.
         15     *        *        *     Request timed out.
         16  ^C

    And nslookup:

        C:\Users\Administrator>nslookup time.windows.com
        Server:  UnKnown
        Address:  192.168.1.1

        Non-authoritative answer:
        Name:    time.microsoft.akadns.net
        Address:  65.55.21.22
        Aliases:  time.windows.com
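
    Two hedged observations rather than a confirmed fix. First, many time servers (time.windows.com included) drop ICMP, so a failed ping does not by itself prove that NTP (UDP port 123) is blocked. Second, since the route dies upstream inside msn.net, a common workaround is to point w32time at a different NTP source and force a resync; pool.ntp.org is just an example peer, and the 0x8 flag marks it client-mode:

        w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:manual /update
        w32tm /resync /rediscover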

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was treated as if I were "shopping or seeking product recommendations" even though I was not - by the way, my comments, which were not offensive in nature, were deleted too. I have re-phrased some parts of my question, and I hope the SF admins do not modify or edit this one - I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not asking someone to design this solution. I am simply seeking real-world examples, experiences, technical expert opinions and suggestions, any tips or tricks, and any problems people may have faced while doing something similar with these products. I am also not asking for capacity planning for storage; we have done some research, and I am seeking expert assurance and suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team plans to deploy this as follows:

    - 1 x dedicated SQL 2008 R2 server for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 server for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management app)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we already have the Windows firewall disabled).

    Server hardware:

    - Per SQL server: 16 GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
    - SEPM server: 16 GB RAM + SAS disks + dual Xeon
    - System Recovery management server: 16 GB RAM + SAS disks + dual Xeon

    The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If we had these users distributed between two sites, with an AD DC/GC in each site, how would I restrict SEPM and the Desktop management solution to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?

    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I could use System Recovery 2011 Server Edition on all four of them as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our license.)

    d) Now - saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the Symantec Recovery server itself, or backing up to each user's hard drive locally and then copying it over to a network location. Suggestions welcome :-)

    If you have anything to add or correct, that will be really helpful before we dive into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections. Many thanks!

  • Looking for advice on Hyper-V storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used options for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of those, the two tools that seem most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, the best option for me seems to be not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because it makes the DataKeeper volume(s) available to the Failover Cluster Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route:

    1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives.
    2) In the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or am overlooking/missing something.

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
    Hello everyone. I built a server with the motherboard mentioned above and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, and I've got that NIC configured with its own IP address. I can remote into the server using IPMI, and I can remotely control the server settings before booting the OS (BIOS, RAID configuration, etc.). When the OS boots, I lose the mouse and keyboard. I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded; once the OS loads, I lose them - that is my problem.

    I've been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on what I've found online, it seems the problem could be caused by the way the OS handles USB. The server is headless - there is no keyboard, mouse, or monitor plugged into it. When I boot up the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I've read, the OS should detect a mouse and a keyboard when connecting remotely via IPMI.

    The following are the solutions I've tried; nothing has worked so far:

    - I've updated the firmware of the IPMI component to the latest version, 1.33.
    - I made sure that the mouse mode was set to Absolute (Windows OS).
    - I've loaded the factory defaults several times.
    - I've enabled Port 64h/60h emulation under the USB settings in the BIOS.
    - I've disabled USB legacy support in the BIOS.
    - I made sure the firewall wasn't blocking IPMI (disabled the firewall).

    And that's about it. I've found threads in some forums from people having the same issue as me, but they were not running the same OS - they were running either Linux or FreeBSD. Most of them fixed the problem by selecting the right mouse mode (Linux, in their case). One other solved it by disabling USB mass storage mode. He stated: "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html

    I suspect the solution described in the previous paragraph is somehow related to my problem. I've found several threads on the internet describing the same problem, but none of them involved Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the issue; I just don't know how to go about applying such a solution in Windows Server 2008 R2. In any case, I could use your expertise. Maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance.

  • Tomcat and ASP site under IIS6 with SSL

    - by Rafe
    I've been working on migrating our company's website from its original server to a new one, and I am having two different but possibly related problems. The box this is sitting on is a Windows Server 2003 x64 machine running IIS 6. The Tomcat version is 5.5.x, as that is the version the original deployment was built on. There are two other sites on the server, one in plain HTML and another in PHP; the one I am trying to migrate is a combination of Java and ASP (the introductory/sign-in pages are Java, as are many reports used by our clients, while the administration pages are ASP).

    First of all, I can only access the site if I enter the IP followed by :8080 (xxx.xxx.xxx.xxx:8080). The original setup had an index.html file in the root of the site with a bit of JavaScript in the header that redirected to 'www.mysite.com/app/public', but if I try going directly to the site without the 8080 I get a 'page not found' error, and the JavaScript redirector causes the same problem because it doesn't add the 8080 to the URL - even though on the original site the 8080 wasn't needed, so I don't understand why it would be needed now. The JS redirect looks like this:

        <script language="JavaScript">
        <!--
        location.href = "/app/public/"
        location.replace("/app/public/");
        //-->
        </script>

    When setting the site up, I used the command line to unbind IIS from all of the IPs on the system (there are 12 IPs on this box) because I was led to believe Tomcat wanted to use localhost, which wasn't accessible. I'm not sure if this was the right thing to do, but I'm throwing it in for the sake of completeness. At this point, trying to go to localhost from the server itself throws up a 'could not connect to localhost' error. If I go to localhost:8080 I get the Tomcat welcome page, and if I go to localhost:8080/app/public I get the intro page of our website. So I'm not sure what I'm even looking at in this case - that is, what the proper behavior should be.

    The second part of the problem is that if I go to either the IP or localhost as above (localhost:8080/app/public) and click on our login link, it is supposed to transfer me to our login page, yet instead I receive a 'could not connect' error, and the URL has changed to localhost:8443/app/secure. From my research I see that 8443 is Tomcat's SSL port, and the server.xml refers to it as follows:

        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8443" acceptCount="100"
                   connectionTimeout="20000" disableUploadTimeout="true" />

    I have an SSL certificate assigned to the site via IIS and was under the impression that by default Tomcat allowed IIS to handle secure connections, but apparently something is munged, because it's not working. There is another section in the server.xml that reads like this:

        <Connector port="8009"
                   enableLookups="false" redirectPort="443" protocol="AJP/1.3" />

    I'm not sure what this is for, although port 443 is the SSL port that IIS uses, so I'm confused as to what it is supposed to be doing.

    Another question I have is: when does the isapi_redirector actually come into play? How does it know when to try to serve pages through Tomcat and when not to? I've hunted around the 'net for an answer and have yet to find a clear explanation. Does anyone have any pointers as to where to look for a solution to all of this?
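
    On the last question: the isapi_redirect filter decides which requests go to Tomcat purely from its URI map; everything it matches is forwarded over AJP to the connector on port 8009 (which is what that second Connector element is for), and everything else stays with IIS. A minimal sketch, assuming the conventional workers.properties / uriworkermap.properties pair (names and values are the usual defaults - adjust to your install):

        # workers.properties - defines the AJP worker the ISAPI filter talks to;
        # the port matches the AJP/1.3 connector in server.xml
        worker.list=ajp13w
        worker.ajp13w.type=ajp13
        worker.ajp13w.host=localhost
        worker.ajp13w.port=8009

        # uriworkermap.properties - only URIs matched here are handed to Tomcat;
        # the ASP and static sites never touch these mappings
        /app/*=ajp13w

    With that in place, IIS terminates SSL on 443 and forwards over AJP, so browsers should never be bounced to 8443; the redirectPort only fires when a security-constrained request arrives directly on Tomcat's own HTTP connector.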

  • What networking hardware do I need in this situation (Fairpoint [ISP] "E-DIA" connection)?

    - by Tegeril
    Right away you'd probably want to say, "Well, just ask Fairpoint." I've done that, a number of times and in as many different ways as I can phrase it, and I just keep hitting a brick wall where they will not commit to giving any useful information, instead recommending that I contract an outside firm and spend a pile of money. Anyway...

    I'm trying to help a family member out with an office connection that is being set up. I've managed to scrape tiny details here and there from our discussions with the ISP (Fairpoint in Maine) about what is going to be done and what is going to be needed. This is the connection being set up: http://www.fairpoint.com/enterprise/vantagepoint/e-dia/index.jsp

    Information I have been given:

    - Via this connection I can get IPs across different C blocks if that were necessary (it is not).
    - Fairpoint is bringing hardware with them that they claim simply does the conversion from whatever line is coming into the building to Ethernet. They have referred to this as the "Fairpoint Netvanta", which I know suggests a line of products that I have looked up, but some (most? all?) of those seem to handle all the routing themselves.
    - Fairpoint says that I need to bring my own router to sit behind their device. They have literally declined even to suggest products that have worked for other clients in the past, and fall back on "any business router works, not a home router." That alone makes my head spin.

    Detail and clarity hit a brick wall from there. At one point I got them to cough up that the router I provide needs to be able to do VPN tunneling, but they typically fall back to "not a home router", and I was even given "just a business router, Cisco or something, it'll be $500-$1000". Now, I know that VPN-tunneling routers exist well below that price point, and since this connection is going to one machine, possibly two, via Ethernet only, my desire to purchase networking hardware that over-delivers on what I need is not very high. They are literally setting all this up, have provided no configuration details for after they finish, and expect me to just plunk a $500+ router behind it and cross my fingers, or contract out to a third-party company. If there were other options available for the location, I would have dropped them in a second, but there aren't.

    The device being connected requires a static IP, and I'm honestly a bit hazy on the necessity of an additional router behind their device, and generally a bit over my head. I presume that the router needs to be able to serve external static IPs to its clients, but I really don't know what is going to show up when they come to do the install. This was originally going to be run via an ADSL bridge modem with a range of static IPs (which is easy, and is currently set up properly), but the location is too far from the telco to get the speeds we really want for upload, and this also needs to be a high-availability connection.

    Any suggestions would be greatly appreciated (I see a number of options in the Cisco Small Business line and from other competitors that aren't going to break the bank...), especially if you've worked with Fairpoint before! Thanks for reading my wall of text.

  • MDB listening to a Topic in JBoss 5.1

    - by fool
    Hi, I have a topic configured correctly in JBoss 5.1, like this:

        <mbean code="org.jboss.jms.server.destination.TopicService"
               name="jboss.messaging.destination:service=Topic,name=GreetingsTopic"
               xmbean-dd="xmdesc/Topic-xmbean.xml">
          <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
          <depends>jboss.messaging:service=PostOffice</depends>
        </mbean>

    I can see the topic being started in the JBoss logs:

        23:08:40,990 INFO [TopicService] Topic[/topic/GreetingsTopic] started, fullSize=200000, pageSize=2000, downCacheSize=2000

    And I have this MDB:

        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jmx.Topic"),
            @ActivationConfigProperty(propertyName = "destination", propertyValue = "topic/GreetingsTopic")
        })
        public class GreetingsClientWebMDB implements MessageListener {
            ...
        }

    When I deploy the MDB, I get a strange error:

        23:09:30,781 ERROR [JmsActivation] Unable to reconnect
        org.jboss.resource.adapter.jms.inflow.JmsActivationSpec@1a1d638(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter@1da5b5b
        destination=topic/GreetingsTopic destinationType=javax.jmx.Topic tx=true durable=false reconnect=10
        provider=java:/DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=60000
        useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ
        DLQUser=null DLQMaxResent=5)
        java.lang.ClassCastException: Object at 'topic/GreetingsTopic' in context
        {java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory,
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces:org.jboss.naming:org.jnp.interfaces}
        is not an instance of
        [class=javax.jms.Queue classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml}
        interfaces={interface=javax.jms.Destination classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml}}]
        object class is
        [class=org.jboss.jms.destination.JBossTopic classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml}
        interfaces={interface=javax.jms.Topic classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml}}]
            at org.jboss.util.naming.Util.checkObject(Util.java:338)
            at org.jboss.util.naming.Util.lookup(Util.java:223)
            at org.jboss.resource.adapter.jms.inflow.JmsActivation.setupDestination(JmsActivation.java:464)
            at org.jboss.resource.adapter.jms.inflow.JmsActivation.setup(JmsActivation.java:352)
            at org.jboss.resource.adapter.jms.inflow.JmsActivation.handleFailure(JmsActivation.java:292)
            at org.jboss.resource.adapter.jms.inflow.JmsActivation$SetupActivation.run(JmsActivation.java:733)
            at org.jboss.resource.work.WorkWrapper.execute(WorkWrapper.java:205)
            at org.jboss.util.threadpool.BasicTaskWrapper.run(BasicTaskWrapper.java:260)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

    I also tried using a deployment descriptor instead of annotations, but with the same result. I really can't see the problem :(
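
    A hedged observation from the trace itself: the activation config says destinationType = "javax.jmx.Topic" (jmx, not jms). An unrecognized type would explain the adapter treating the destination as its default javax.jms.Queue and then failing to cast the JBossTopic it finds in JNDI. The corrected annotation would read:

        @MessageDriven(activationConfig = {
            // javax.jms.Topic - note "jms", not "jmx"
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
            @ActivationConfigProperty(propertyName = "destination", propertyValue = "topic/GreetingsTopic")
        })
        public class GreetingsClientWebMDB implements MessageListener { ... }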

  • Unable to run Wix Custom Action in MSI

    - by Grandpappy
    I'm trying to create a custom action for my WiX install, and it's just not working; I'm unsure why. Here's the relevant bit of the WiX file:

        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.dll" />
        <CustomAction Id="HelperAction"
                      BinaryKey="INSTALLERHELPER"
                      DllEntry="CustomAction1"
                      Execute="immediate" />

    Here's the full class file for my custom action:

        using Microsoft.Deployment.WindowsInstaller;

        namespace InstallerHelper
        {
            public class CustomActions
            {
                [CustomAction]
                public static ActionResult CustomAction1(Session session)
                {
                    session.Log("Begin CustomAction1");
                    return ActionResult.Success;
                }
            }
        }

    The action is run by a button press in the UI (for now):

        <Control Id="Next" Type="PushButton" X="248" Y="243" Width="56" Height="17"
                 Default="yes" Text="!(loc.WixUINext)">
          <Publish Event="DoAction" Value="HelperAction">1</Publish>
        </Control>

    When I run the MSI, I get this error in the log:

        MSI (c) (08:5C) [10:08:36:978]: Connected to service for CA interface.
        MSI (c) (08:4C) [10:08:37:030]: Note: 1: 1723 2: SQLHelperAction 3: CustomAction1 4: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        MSI (c) (08:4C) [10:08:38:501]: Product: SessionWorks :: Judge Edition -- Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        Action ended 10:08:38: SQLHelperAction. Return value 3.
        DEBUG: Error 2896: Executing action SQLHelperAction failed.
        The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2896. The arguments are: SQLHelperAction, ,

    Neither of the two error codes or messages gives me enough to tell what's wrong - or perhaps I'm just not understanding what they're saying. At first I thought it might be because I was using WiX 3.5, so just to be sure I tried WiX 3.0, but I get the same error. Any ideas on what I'm doing wrong?
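
    A hedged guess grounded in how WiX/DTF managed custom actions are packaged: a managed custom-action assembly cannot be run by MSI directly - it has to be wrapped with the DTF self-extracting shim (MakeSfxCA), which the WiX C# custom action project emits as YourProject.CA.dll. Pointing the Binary element at the bare managed DLL produces exactly this 1723/2896 pair. The fix sketch (path is illustrative):

        <!-- Reference the SfxCA-wrapped output, not the bare managed assembly -->
        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.CA.dll" />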

  • .NET Crystal Report printing application running over a Terminal Services connection errors when the session disconnects

    - by MrEdmundo
    I have created a .NET application that runs on an app server, receives requests for a report, and prints out the requested report. The C# application uses Crystal Reports to load the report and subsequently print it. The application runs on a server that is connected to via a Remote Desktop connection under a particular user account (required for old apps). When I disconnect from the remote session, the application starts raising exceptions such as:

        Message: CrystalDecisions.Shared.CrystalReportsException: Load report failed

    This type of error is never raised while the remote session is active. The server running the app is on Windows Server 2003; the box I create the connection from is Windows XP. I appreciate this is fairly weird, but I cannot see any problem with the application deployment I have created. Does anyone know what could be causing this issue?

    EDIT: I bit the bullet and created the application as a Windows service - obviously this doesn't take long, I just wasn't convinced it would solve the problem. Anyway, it doesn't! I have also tried removing the multi-threaded code that was calling the print function asynchronously. I did this to simplify the app and narrow down the reasons it could fail. This didn't improve the situation either.

    EDIT: The two errors I get are:

        System.Runtime.InteropServices.COMException (0x80000201): Invalid printer specified.
           at CrystalDecisions.ReportAppServer.Controllers.PrintOutputControllerClass.ModifyPrinterName(String newVal)
           at CrystalDecisions.CrystalReports.Engine.PrintOptions.set_PrinterName(String value)
           at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    The printer isn't invalid - this is confirmed when, 60 seconds later, the timer ticks and the report is printed successfully. And:

        The request could not be submitted for background processing.
           at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.GetLastPageNumber(RequestContext pRequestContext)
           at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
           --- End of inner exception stack trace ---
           at CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception e)
           at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
           at CrystalDecisions.CrystalReports.Engine.FormatEngine.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
           at CrystalDecisions.CrystalReports.Engine.ReportDocument.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
           at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    EDIT: I ran Filemon to check whether there was an access issue. At the point when the error occurs, Filemon reports:

        Request: OPEN | Path: C:\windows\assembly\gac_msil\system\2.0.0.0__b77a5c561934e089\ws2_32.dll | Result: NOT FOUND | Other: Attributes Error
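
    Both stack traces end in the same PrintCrystalReport call, and the post notes the identical job succeeds about a minute later - so the failures look transient, tied to the session's printer context briefly vanishing on disconnect. A hedged mitigation sketch (not a root-cause fix) is to retry the print with a delay; the wrapper below is hypothetical and only mirrors the call shown in the traces:

        using System;
        using System.Threading;
        using CrystalDecisions.CrystalReports.Engine;

        // Hypothetical retry wrapper; the 60-second delay matches the post's
        // observation that the same job prints fine about a minute later.
        static void PrintWithRetry(ReportDocument report, int attempts)
        {
            for (int i = 1; ; i++)
            {
                try
                {
                    report.PrintToPrinter(1, false, 0, 0); // same call as in the trace
                    return;
                }
                catch (Exception)
                {
                    if (i >= attempts) throw;        // give up after the last attempt
                    Thread.Sleep(TimeSpan.FromSeconds(60));
                }
            }
        }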

  • Application Architecture using WCF and System.AddIn

    - by Silverhalide
    A little background - we're designing an application that uses a client/server architecture consisting of:

    - A server which loads server-side modules, potentially developed by other teams.
    - A client which loads corresponding client-side modules (also potentially developed by those other teams; each client module corresponds to a server module).
    - The client side communicates with the server side for general coordination as well as module-specific tasks. (At this point, I think that means the client talks to the server, and client modules talk to server modules.)

    The environment is .NET 3.5, and the client side is WPF. The deployment scenario introduces the potential to upgrade the server, any server-side module, the client, and any client-side module independently. However, being able to "work" using mismatched versions is required, so I'm concerned about versioning issues.

    My thinking so far:

    - A Windows service for the server.
    - Using System.AddIn for the server to load and communicate with the server modules will give us the greatest flexibility in version compatibility between server and server modules.
    - The server and each server module expose WCF services for communication with the client side; communication between the server and a server module, or between two server modules, uses the AddIn contracts. (One advantage of this is that a module can expose a different interface inside the server than outside it.)
    - Similarly, the client uses System.AddIn to find, load, and communicate with the client modules. Client communication with client modules is via the AddIn interface; communication from the client and from client modules to the server side is via WCF.
    - For maximum resilience, each module will run in a separate AppDomain.

    In general, the system has modest performance requirements, so marshalling and crossing process boundaries is not expected to be a performance concern. (The performance requirement basically boils down to: don't get in the way of the other parts of the system not described here.)

    My questions revolve around the idea of having two different communication and versioning models to work with, which will be an added burden on our developers. System.AddIn seems quite powerful, but also a little unwieldy. (I'm also unsure of Microsoft's commitment to it in the future.) On the other hand, I'm not thrilled with WCF's versioning capabilities. I have a feeling it would be possible to implement the System.AddIn view/adapter/contract system within WCF, but being fairly new to both technologies, I have no idea where to start.

    So... am I on the right track here? Am I doing this the hard way? Are there gotchas I need to be aware of on this road? Thanks.
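
    For concreteness, a minimal sketch of the discovery/activation step a System.AddIn host performs - IServerModule, its Start method, and the pipeline path are hypothetical names, and the real pipeline also needs the contract/view/adapter assemblies laid out in the standard directory structure:

        using System.AddIn.Hosting;

        // Rebuild the pipeline caches, then activate each discovered module
        // in its own AppDomain (matching the isolation goal above).
        string pipelineRoot = @"C:\MyApp\Pipeline";   // hypothetical path
        string[] warnings = AddInStore.Update(pipelineRoot);

        foreach (AddInToken token in AddInStore.FindAddIns(typeof(IServerModule), pipelineRoot))
        {
            // Activate<T> spins up a separate AppDomain per add-in.
            IServerModule module = token.Activate<IServerModule>(AddInSecurityLevel.FullTrust);
            module.Start();   // hypothetical contract method
        }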

  • WCF - Windows authentication - Security settings require Anonymous...

    - by Rashack
    Hi, I am struggling hard with getting a WCF service running on IIS on our server. After deployment I end up with this error message:

        Security settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service.

    I want to use Windows authentication and thus have anonymous access disabled. Also note that aspNetCompatibilityEnabled is set (if that makes any difference). Here's my web.config:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <bindings>
            <webHttpBinding>
              <binding name="default">
                <security mode="TransportCredentialOnly">
                  <transport clientCredentialType="Windows" proxyCredentialType="Windows"/>
                </security>
              </binding>
            </webHttpBinding>
          </bindings>
          <behaviors>
            <endpointBehaviors>
              <behavior name="AspNetAjaxBehavior">
                <enableWebScript />
                <webHttp />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="defaultServiceBehavior">
                <serviceMetadata httpGetEnabled="true" httpsGetEnabled="false" />
                <serviceDebug includeExceptionDetailInFaults="true" />
                <serviceAuthorization principalPermissionMode="UseWindowsGroups" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="xxx.Web.Services.RequestService" behaviorConfiguration="defaultServiceBehavior">
              <endpoint behaviorConfiguration="AspNetAjaxBehavior" binding="webHttpBinding"
                        contract="xxx.Web.Services.IRequestService" bindingConfiguration="default">
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" name="mex" contract="IMetadataExchange"></endpoint>
            </service>
          </services>
        </system.serviceModel>

    I have searched all over the internet with no luck. Any clues are greatly appreciated.
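
    A hedged observation rather than a confirmed fix: even with the application endpoint locked to Windows credentials, the metadata plumbing in this config still defaults to anonymous HTTP - both httpGetEnabled="true" and the mexHttpBinding endpoint - and either can trigger exactly this activation error when anonymous access is off in IIS. A sketch of the variant to try:

        <!-- metadata over anonymous HTTP switched off -->
        <serviceMetadata httpGetEnabled="false" httpsGetEnabled="false" />

        <!-- and the mex endpoint removed (or moved to a secured binding):
        <endpoint address="mex" binding="mexHttpBinding" name="mex" contract="IMetadataExchange"></endpoint>
        -->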

  • How can I load style resources from a dynamically loaded Silverlight application (XAP)?

    - by Tom
    I've followed Tim Heuer's video for dynamically loading other XAPs (into a 'master' Silverlight application), as well as some other links for tweaking the loading of resources, and I am stuck on the particular issue of loading style resources from within the dynamically loaded XAP (i.e., the contents of Assets\Styles.xaml). When I run the master/hosting application, it successfully streams the dynamic XAP, and I can read the deployment info etc. and load the assembly parts. However, when I actually try to create an instance of a form from the dynamic XAP, it fails with:

        Cannot find a Resource with the Name/Key LayoutRootGridStyle

    which is in its Assets\Styles.xaml file (it works if I run it directly, so I know it's OK). For some reason these don't show up as application resources - I'm not sure if I've totally got the wrong end of the stick, or am just missing something. Code snippets below (apologies, they're a bit messy - just trying to get it working first).

    Here's the code that reads the dynamic XAP from the web server:

        wCli = New WebClient
        AddHandler wCli.OpenReadCompleted, AddressOf OpenXAPCompleted
        wCli.OpenReadAsync(New Uri("MyTest.xap", UriKind.Relative))

    Here's the sub that's called when OpenRead is completed:

        Private Sub OpenXAPCompleted(ByVal sender As Object, ByVal e As System.Net.OpenReadCompletedEventArgs)
            Dim sManifest As String = New StreamReader(Application.GetResourceStream(New StreamResourceInfo(e.Result, Nothing), New Uri("AppManifest.xaml", UriKind.Relative)).Stream).ReadToEnd
            Dim deploymentRoot As XElement = XDocument.Parse(sManifest).Root
            Dim deploymentParts As List(Of XElement) = _
                (From assemblyParts In deploymentRoot.Elements().Elements() Select assemblyParts).ToList()

            Dim oAssembly As Assembly = Nothing
            For Each xElement As XElement In deploymentParts
                Dim asmPart As AssemblyPart = New AssemblyPart()
                Dim source As String = xElement.Attribute("Source").Value
                Dim sInfo As StreamResourceInfo = Application.GetResourceStream(New StreamResourceInfo(e.Result, "application/binary"), New Uri(source, UriKind.Relative))
                If source = "MyTest.dll" Then
                    oAssembly = asmPart.Load(sInfo.Stream)
                Else
                    asmPart.Load(sInfo.Stream)
                End If
            Next

            Dim t As Type() = oAssembly.GetTypes()
            Dim AppClass = (From parts In t Where parts.FullName.EndsWith(".App") Select parts).SingleOrDefault()
            Dim mykeys As Array
            If Not AppClass Is Nothing Then
                Dim a As Application = DirectCast(oAssembly.CreateInstance(AppClass.FullName), Application)
                For Each strKey As String In a.Resources.Keys
                    If Not Application.Current.Resources.Contains(strKey) Then
                        Application.Current.Resources.Add(strKey, a.Resources(strKey))
                    End If
                Next
            End If

            Dim objectType As Type = oAssembly.GetType("MyTest.MainPage")
            Dim ouiel = Activator.CreateInstance(objectType)
            Dim myData As UIElement = DirectCast(ouiel, UIElement)
            Me.splMain.Children.Add(myData)
            Me.splMain.UpdateLayout()
        End Sub

    And here's the line that fails with "Cannot find a Resource with the Name/Key LayoutRootGridStyle":

        System.Windows.Application.LoadComponent(Me, New System.Uri("/MyTest;component/MainPage.xaml", System.UriKind.Relative))

    Any thoughts?
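
    One hedged lead: styles defined in Assets/Styles.xaml are normally pulled in via Application.Resources.MergedDictionaries, not as direct entries in Application.Resources - so the key-copying loop above would never see LayoutRootGridStyle. A sketch of also carrying the merged dictionaries across (using the same Application instance 'a' as in the code above; requires Silverlight 3 or later for MergedDictionaries):

        ' Copy top-level keys (as the existing loop does) AND the merged
        ' dictionaries, where styles from Assets/Styles.xaml typically live.
        For Each dict As ResourceDictionary In a.Resources.MergedDictionaries
            Application.Current.Resources.MergedDictionaries.Add(dict)
        Next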

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code to be in order: properly formatted, readable, designed, tested, checked for bugs, etc. In fact, I am fanatical about it. (Maybe even more than fanatical...) But in my experience, actions that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.)

    Code quality does not seem popular. Some examples from my experience:

    - Probably every Java developer knows JUnit, and almost all languages have xUnit frameworks, but in all the companies I know, only very few proper unit tests existed (if any). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could have done so. My conclusion is that developers do not want to write tests.
    - Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
    - Conference speakers and magazines talk a lot about EJB 3.1, OSGi, cloud, and other new technologies, but hardly ever about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes that help maintain higher quality, or how some nasty beast of legacy code was brought under test. (I did not attend many conferences, and it probably looks different for conferences on agile topics, as unit testing, CI and the like have a higher value there.)

    So why is code quality so unpopular/considered boring?

    EDIT: Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.

  • Managed (.NET) library with HTML Tidy-like functionality?

    - by Eamon Nerbonne
    Does anybody know of an HTML cleaner for .NET that can parse HTML and (for instance) convert it to a more machine-friendly format such as XHTML? I've tried the HTML Agility Pack, but it fails to correctly parse even fairly simple examples. An example of HTML that should be parsed correctly:

        <html><body>
        <ul><li>TestElem1
        <li>TestElem2
        <li>TestElem3 List:
            <ul><li>Nested1
            <li>Nested2</li>
            <li>Nested3
            </ul>
        <li>TestElem4
        </ul>
        <p>paragraph 1
        <p>paragraph 2
        <p>paragraph 3
        </body></html>

    li tags don't need to be closed (see the spec), and neither do p tags. In other words, the above sample should be parsed as:

        <html><body>
        <ul><li>TestElem1</li>
        <li>TestElem2</li>
        <li>TestElem3 List:
            <ul><li>Nested1</li>
            <li>Nested2</li>
            <li>Nested3</li>
            </ul></li>
        <li>TestElem4</li>
        </ul>
        <p>paragraph 1</p>
        <p>paragraph 2</p>
        <p>paragraph 3</p>
        </body></html>

    Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around HTML Tidy), which would require extra deployment hassle and sacrifice platform independence. Any suggestions? To recap, I'm looking for:

    - An HTML cleaner à la HTML Tidy
    - Must be able to deal with real-world HTML, not just XHTML - at the very least, correctly reading valid HTML 4
    - Must be able to convert to a more easily processable XML format
    - Should be purely managed code
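
    Not an endorsement, just a hedged pointer: SgmlReader is a purely managed library commonly used for exactly this HTML-to-XmlDocument conversion, and because it reads HTML against an SGML DTD it applies the optional-end-tag rules that the <li>/<p> samples above depend on. A minimal sketch:

        using System.IO;
        using System.Xml;
        using Sgml; // SgmlReader package - managed code only

        static XmlDocument HtmlToXml(string html)
        {
            var sgml = new SgmlReader
            {
                DocType = "HTML",                  // use the built-in HTML DTD
                CaseFolding = CaseFolding.ToLower, // normalize element names
                InputStream = new StringReader(html)
            };
            var doc = new XmlDocument();
            doc.Load(sgml); // SgmlReader is an XmlReader, so this just works
            return doc;
        }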

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo simulation that calculates capital adequacy figures by region for a set of obligors. It is running about 10x too slow for where it will need to be in production, given the number of daily runs required. Additionally, the granularity of the result figures will need to be improved, down to desk level and possibly book level at some stage. The code I've been given is basically a prototype, which is used by business units in a semi-production capacity.

    The application is currently single-threaded, so I'll need to make it multi-threaded. I may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library, but I'm constrained to .NET 2 on the server at this bank, so I may have to consider this guy's port: http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I am trying my best to get them to upgrade to .NET 3.5 SP1, but it's a major exercise in an organisation of this size and might not be possible within my contract time frames.

    I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones?

    A lot of the execution time is spent generating uniform random numbers and then translating them to normally distributed random numbers. The code uses a C# Mersenne twister implementation. I am not sure where they got it, or whether it's the best way to go about generating the uniform random numbers. The uniforms are then translated to a normally distributed version for use in the calculation (I haven't delved into the translation code yet).

    Also, what is the experience using the following?

    - http://quantlib.org
    - http://www.qlnet.org (a C# port of QuantLib)
    - http://www.boost.org

    Any alternatives you know of? I'm a C# developer, so I would prefer C#, but a wrapper around C++ shouldn't be a problem, should it? Maybe it would even be faster, leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step. They may also have some other functions that would be helpful in the subsequent calculations.

    The computer this runs on is a quad-core Opteron 275 with 8 GB of memory, but running Windows Server 2003 Enterprise 32-bit. Should I advise them to upgrade to a 64-bit OS? Any links to articles supporting this decision would really be appreciated.

    Anyway, any advice and help you may have is really appreciated.
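
    On the uniform-to-normal translation step: a common cheap win, if the existing code uses the basic Box-Muller transform, is the polar (Marsaglia) variant, which avoids the trig calls and yields two normals per accepted pair. A sketch - UniformSource stands in for whatever the Mersenne twister exposes (hypothetical delegate, kept .NET 2-friendly by avoiding Func<T>):

        // Polar Box-Muller: converts pairs of uniform (0,1) samples into
        // standard normal samples without sin/cos; caches the spare value.
        delegate double UniformSource();

        static double? spare;

        static double NextGaussian(UniformSource nextUniform)
        {
            if (spare.HasValue)
            {
                double s = spare.Value;
                spare = null;
                return s;
            }
            double u, v, q;
            do
            {
                u = 2.0 * nextUniform() - 1.0;
                v = 2.0 * nextUniform() - 1.0;
                q = u * u + v * v;
            } while (q >= 1.0 || q == 0.0); // reject points outside the unit circle
            double factor = Math.Sqrt(-2.0 * Math.Log(q) / q);
            spare = v * factor; // second normal from the same pair
            return u * factor;
        }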

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that:

    - it (the ID) persists once the client software is installed on the target computer, and continues to persist if the software is re-installed on the same computer and the same OS installation,
    - it does not change if the hardware configuration is modified in most ways (except changing the motherboard),
    - when a hard drive with the client software installed is cloned to another computer with identical (or as similar as possible) hardware, the client software should be aware of that change.

    A little explanation and some backstory: this is basically the age-old question that also touches on the topic of software copy protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please read on. :)

    I'm working on client-server software that is supposed to work in a local network. One of the problems I have to solve is identifying each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client, and retain and enforce those attributes during the deployment lifetime of the client. While looking for a solution, I was aware of the following:

    - The Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications.
    - Disk imaging software copies along all volume IDs (tied to each partition when formatted) and any custom, uniquely generated IDs (created during installation, on first run, or in any other way that is strictly software-based and stored in the registry or on the hard drive), so it's very easy to confuse two clones.

    The obvious choice for this kind of problem would be to read the BIOS identifiers (I'm not 100% sure these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning and can't be changed (at least not by a user-space program). Everything else fails as either not reliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes).

    Am I missing something obvious here? A sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task I have to accomplish...

    Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can be entirely software-based and non-unique at any given moment) and tells a client to come up with a different ID during the handshake if a duplicate ID is provided upon connection. Unfortunately, that approach doesn't play nicely with the requirement to tie attributes to a specific client during its lifetime.
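
    If the motherboard route is the anchor, here is a hedged sketch of reading board identifiers via WMI (System.Management) - with the caveat that some OEM boards report blank or placeholder serial numbers, so combining several fields (and, if need be, Win32_BIOS.SerialNumber) and hashing them is the usual hedge:

        using System.Management; // reference System.Management.dll

        static string GetBoardFingerprint()
        {
            // Win32_BaseBoard: survives reinstall and disk cloning,
            // changes with the motherboard - matching the constraints above.
            using (var searcher = new ManagementObjectSearcher(
                "SELECT Manufacturer, Product, SerialNumber FROM Win32_BaseBoard"))
            {
                foreach (ManagementObject board in searcher.Get())
                {
                    return string.Format("{0}|{1}|{2}",
                        board["Manufacturer"], board["Product"], board["SerialNumber"]);
                }
            }
            return null; // WMI gave us nothing - fall back to another scheme
        }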

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database) with brand new virtual servers. I just want to ask for some advice on my current server setup - a sanity check. All servers run Windows Server 2008, and the pages on our website are all classic ASP.

    - Database: a SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test, anyway).
    - Content Manager: a single server running the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.
    - Content Data Store: a filesystem located somewhere on the network, with one directory for live and one for staging.
    - Content Delivery: two servers (cd1 and cd2), each with the content delivery server roles installed. cd1 writes to the filesystem content data store for the live website; cd2 writes to the content data store for the staging website.
    - Presentation: two public-facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store, as it's a filesystem. Each web server also has the Content Delivery Server installed so that I can use dynamic linking (and other features?).

    I've so far set up everything but the web servers. Any thoughts?

    edit: Thanks to Ram S, who linked me to a decent walkthrough - upvoted. I suppose I should have posed some questions, as I didn't really ask anything. I'm a little confused by the content delivery aspect. I have Content Delivery split into two separate parts: cd1 and cd2 do the work of shifting information from the Content Manager to the staging/live web directories, while web1 and web2 do the work of serving the web pages to the outside world and interact with the content data store (filesystem). Is this a correct setup? I need some parts of Content Delivery on my web servers, right? Theoretically, I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, right? But I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, bought TP6, called Borland the next day, and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago, for two reasons:

    1. I spent all day writing ASP and SQL for someone else.
    2. PC Techniques magazine went away.

    I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi, or did when it was Borland (before Borland bought dBase and all the other crap). I'd like to salvage as much of my D5E code as possible, but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS SQL and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures, and is so easy to deploy that even the Geico gecko could deploy it?

    10/25/2009 18:53 EST - Re-opened after reading the install docs for Delphi 2010.

    I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package, starting with the install.htm inside the zip. install.htm wisely tells you to see the following two articles:

    - Installation Notes: http://edn.embarcadero.com/article/39754
    - Release Notes: http://edn.embarcadero.com/article/39758

    The release notes state the following:

        MSSQL driver requires the installation of the SQL Native Client.
        SQL Native Client 2008 is required for dbxmss.dll.
        SQL Native Client 2005 is required for dbxmss9.dll.

    I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs, so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums (read thread here). After reading that thread and some careful thought, I don't think I will be using Microsoft SQL Express - I can't rely on my customers having the right drivers installed. So I'm back to looking for a different solution. If I'm selling a $40 product to the general masses, I need a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • How to get GWT 2.0 and Restlet 2.0 to play nice

    - by Holograham
    Hello, I am having trouble getting Restlet to play nice with GWT in the same project. I have been trying the examples from Restlet's website, to no avail. I am using Eclipse, the Maven2 plugin, GWT, and Restlet GWT. I have never used server-side code in this GWT project before, and I know there is some custom setup involved. I am deploying locally for the time being, using the built-in Jetty in GWT hosted mode. I can get my front end to display, but my back end is not executing. I did add this to my web.xml file, which is from the example (for the time being I am trying to drop the example into my project):
        <servlet>
          <servlet-name>adapter</servlet-name>
          <servlet-class>org.restlet.ext.gwt.GwtShellServletWrapper</servlet-class>
          <init-param>
            <param-name>org.restlet.application</param-name>
            <param-value>org.restlet.example.gwt.server.TestServerApplication</param-value>
          </init-param>
        </servlet>
        <servlet-mapping>
          <servlet-name>adapter</servlet-name>
          <url-pattern>/*</url-pattern>
        </servlet-mapping>
    I did notice that my web.xml file is located separately from my war deployment directory. My project directory structure is set up as follows:
        Project Root
          src/main/java
          src/main/resources
          src/main/test
          JRE
          Maven Dependencies
          GWT SDK
          src
            main
              webapp
                WEB-INF
                  web.xml
          target
            war
              <my project dir>
                WEB-INF
                  lib
          pom.xml
    So there is no web.xml under my war file's WEB-INF directory. I am new to this type of application, so it is most likely a matter of me not understanding the directory structure and how my GWT project compiles into these directories. Any help is appreciated! If I need to provide any more info, let me know. Thanks
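    For what it's worth, here is a minimal sketch of the kind of server-side class the org.restlet.application init-param above must resolve to. This is an illustrative assumption, not the actual Restlet example source: HelloResource and the "/hello" route are made-up names, but the Application/Router/ServerResource pattern is standard Restlet 2.0.
        // Hypothetical sketch of the server-side application referenced in web.xml.
        package org.restlet.example.gwt.server;

        import org.restlet.Application;
        import org.restlet.Restlet;
        import org.restlet.resource.Get;
        import org.restlet.resource.ServerResource;
        import org.restlet.routing.Router;

        public class TestServerApplication extends Application {
            @Override
            public Restlet createInboundRoot() {
                // Route incoming calls to the resource class below.
                Router router = new Router(getContext());
                router.attach("/hello", HelloResource.class);
                return router;
            }
        }

        class HelloResource extends ServerResource {
            @Get
            public String represent() {
                // Returned to the client as the response entity.
                return "hello from Restlet";
            }
        }
    A quick check with such a class in place: if a plain HTTP request to the mapped path (e.g. http://localhost:8084/hello) returns the expected string, the servlet wiring works and the issue is on the GWT client side; if it 404s, the back end is not being mapped at all.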

    Read the article

  • Java web-app debugging not possible due to direct undeploy on debug

    - by Hendrik
    When I try to debug my webapp, it starts up the Tomcat server and the application, but shuts down the debugger shortly before the app becomes usable. I see the debugging toolbar for a second before it vanishes again, though the app keeps running.
    Tomcat log:
        Listening for transport dt_socket at address: 11555
        23.03.2010 01:24:35 org.apache.catalina.core.AprLifecycleListener init
        INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: .:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
        23.03.2010 01:24:35 org.apache.coyote.http11.Http11Protocol init
        INFO: Initializing Coyote HTTP/1.1 on http-8084
        23.03.2010 01:24:35 org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 847 ms
        23.03.2010 01:24:35 org.apache.catalina.core.StandardService start
        INFO: Starting service Catalina
        23.03.2010 01:24:35 org.apache.catalina.core.StandardEngine start
        INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
        log4j:WARN No appenders could be found for logger (org.springframework.web.context.ContextLoader).
        log4j:WARN Please initialize the log4j system properly.
        23.03.2010 01:24:41 org.apache.coyote.http11.Http11Protocol start
        INFO: Starting Coyote HTTP/1.1 on http-8084
        23.03.2010 01:24:41 org.apache.jk.common.ChannelSocket init
        INFO: JK: ajp13 listening on /0.0.0.0:8009
        23.03.2010 01:24:41 org.apache.jk.server.JkMain start
        INFO: Jk running ID=0 time=0/78 config=null
        23.03.2010 01:24:41 org.apache.catalina.startup.Catalina start
        INFO: Server startup in 5855 ms
        23.03.2010 01:24:42 org.apache.catalina.startup.HostConfig checkResources
        INFO: Undeploying context []
        23.03.2010 01:24:45 org.apache.catalina.core.StandardContext start
        INFO: Container org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/] has already been started
    Debugging log:
        Attached JPDA debugger to localhost:11555
        Checking data source definitions for missing JDBC drivers...
        Deploying JDBC driver to /Applications/NetBeans/apache-tomcat-6.0.20/lib/mysql-connector-java-5.1.6-bin.jar
        Stopping Tomcat process...
        Waiting for Tomcat...
        Tomcat server stopped.
        Starting Tomcat process...
        Waiting for Tomcat...
        Tomcat server started.
        Undeploying ...
        OK - Undeployed application at context path /
        In-place deployment at /path/to/project/dir/build/web
        deploy?config=file%3A%2Fvar%2Ffolders%2FZP%2FZPbqxGrbHFaUlXzAfgWV1%2B%2B%2B%2BTQ%2F-Tmp-%2Fcontext734173871283203218.xml&path=/
        OK - Deployed application at context path /
        start?path=/
        Start is in progress...
        OK - Started application at context path /
        debug-display-browser:
        Browsing: http://localhost:8084/
        connect-client-debugger:
        BUILD SUCCESSFUL (total time: 18 seconds)
    The system is NetBeans 6.8 on Mac OS X 10.6.2. Thank you very much, guys!

    Read the article

  • Workflows not starting after fresh install

    - by Greg McGuffey
    I just installed Dynamics CRM 4.0. It is working nicely except for workflows: they won't start. I turned on tracing, and it appears that there is an IO error. The server is set up with IFD and SSL; there are no issues accessing it internally or externally. Here is the trace:
        # CRM Tracing Version 2.0
        # LocalTime: 2010-06-08 11:34:58.2
        # Categories:
        # CallStackOn: No
        # ComputerName: FOX-CRM1
        # CRMVersion: 4.0.7333.2741
        # DeploymentType: OnPremise
        # ScaleGroup:
        # ServerRole: AppServer, AsyncService, DiscoveryService, WebService, ApiServer, HelpServer, DeploymentService
        [2010-06-08 11:34:58.2] Process:CrmAsyncService |Organization:821a137e-7191-49a4-86cc-69101e2b6d20 |Thread: 24 |Category: Platform.Async |User: 00000000-0000-0000-0000-000000000000 |Level: Error | AsyncOperationCommand.Execute
        >Exception while trying to execute AsyncOperationId: {DF68F483-2C73-DF11-9A34-18A9053B7B38} AsyncOperationType: 1 - System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: The handshake failed due to an unexpected packet format.
        at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
        at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
        at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
        at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
        at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
        at System.Net.TlsStream.CallProcessAuthentication(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
        at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.ConnectStream.WriteHeaders(Boolean async)
        --- End of inner exception stack trace ---
        at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at Microsoft.Crm.SdkTypeProxy.CrmService.Retrieve(String entityName, Guid id, ColumnSetBase columnSet)
        at Microsoft.Crm.Asynchronous.SdkTypeProxyCrmServiceWrapper.Retrieve(String entityName, Guid id, ColumnSetBase columnSet)
        at Microsoft.Crm.Asynchronous.SdkPluginDescriptionProvider.GetPluginTypeDescription(Guid pluginTypeId, IOrganizationContext context)
        at Microsoft.Crm.Caching.PluginTypeCacheLoader.LoadCacheData(Guid key, IOrganizationContext context)
        at Microsoft.Crm.Caching.CrmMultiOrgCache`2.CreateEntry(TKey key, IOrganizationContext context)
        at Microsoft.Crm.Caching.CrmSharedMultiOrgCache`2.LookupEntry(TKey key, IOrganizationContext context)
        at Microsoft.Crm.Caching.PluginTypeCache.LookupEntry(Guid pluginTypeId, IOrganizationContext context)
        at Microsoft.Crm.Asynchronous.AsyncOperationCommand.GetPluginType(Guid pluginTypeId)
        at Microsoft.Crm.Asynchronous.EventOperation.InternalExecute(AsyncEvent asyncEvent)
        at Microsoft.Crm.Asynchronous.AsyncOperationCommand.Execute(AsyncEvent asyncEvent)
    The only thing I've tried is updating the AsyncSdkRootDomain row in the Deployment table to match the ADSdkRootDomain and ADApplicationRootDomain values. It was blank. That didn't appear to work. After some more research, I think this might be caused by the async service being unable to access the SDK web services over SSL. If this is correct, how would one configure a CRM server for secure access, internal and external (IFD), while still allowing the async service to hit the web site? Thanks for your help!

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:
        CREATE TABLE [dbo].[Articles](
            [server_ref] [int] NOT NULL,
            [article_ref] [int] NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref] [int] NOT NULL,
            [size] [bigint] NOT NULL
        )
    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. The import process is:
    1. Indexes are disabled on the Articles table.
    2. For each dumped text file:
       - the data is BULK copied to a temporary table;
       - the temporary table is updated;
       - old data for the server is dropped from the Articles table;
       - the temporary table data is copied to the Articles table;
       - the temporary table is dropped.
    3. Once this process is complete for all servers, the indexes are built and the new database is copied to a web server.
    I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. For example:
        SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%'
    has been satisfactory, but I want to improve the speed of searching. Obviously the LIKE is my problem here. Suggestions? And
        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'
    is horrendous. Partitioning is a feature of SQL Server Enterprise but costs $$$; built-in partitioning is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The thought process behind having an import server and a web server is that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work, to skip the copying-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts?
    This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake-up.
    UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
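    Since the leading-wildcard LIKE is the stated bottleneck, here is a hedged sketch of one PostgreSQL-side alternative: full-text search backed by a GIN index. The table and column names follow the Articles schema above; the index name and the 'english' text-search configuration are illustrative assumptions, not part of the original question.
        -- Sketch only: PostgreSQL full-text search instead of LIKE '%criteria%'.
        -- Assumes the Articles schema above; the index name and 'english'
        -- configuration are illustrative choices.
        CREATE INDEX idx_articles_title_fts
            ON Articles
            USING GIN (to_tsvector('english', article_title));

        -- Word-level match on the title; this can use the GIN index rather
        -- than a sequential scan over ~500 million rows.
        SELECT *
        FROM Articles
        WHERE server_ref = 33
          AND to_tsvector('english', article_title)
              @@ plainto_tsquery('english', 'criteria');
    Note that full-text search matches whole words rather than arbitrary substrings; for true substring matching, the pg_trgm extension with a trigram GIN index is the usual alternative.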

    Read the article

  • Custom validator not invoked when using Validation Application Block through configuration

    - by Chris
    I have set up a ruleset in my configuration file which has two validators, one of which is a built-in NotNullValidator, the other of which is a custom validator. The problem is that I see the NotNullValidator hit, but not my custom validator. The custom validator is being used to validate an Entity Framework entity object. I have used the debugger to confirm the NotNull is hit (I forced a failure condition, so I saw it set an invalid result), but it never steps into the custom one. I am using MVC as the web app, so I defined the ruleset in a config file at that layer, but my custom validator is defined in another project. However, I wouldn't have thought that to be a problem, because when I use the Enterprise Library Configuration tool inside Visual Studio 2008 it is able to set the type properly for the custom validator. As well, I believe the custom validator is fine, as it builds okay and the config tool can reference it properly. Does anybody have any ideas what the problem could be, or even what to do/try to debug further?
    Here is a stripped-down version of my custom validator:
        [ConfigurationElementType(typeof(CustomValidatorData))]
        public sealed class UserAccountValidator : Validator
        {
            public UserAccountValidator(NameValueCollection attributes)
                : base(string.Empty, "User Account")
            {
            }

            protected override string DefaultMessageTemplate
            {
                get { throw new NotImplementedException(); }
            }

            protected override void DoValidate(object objectToValidate, object currentTarget, string key, ValidationResults results)
            {
                if (!currentTarget.GetType().Equals(typeof(UserAccount)))
                {
                    throw new Exception();
                }
                UserAccount userAccountToValidate = (UserAccount)currentTarget;
                // snipped code ...
                this.LogValidationResult(results, "The User Account is invalid", currentTarget, key);
            }
        }
    Here is the XML of my ruleset in Validation.config (the NotNull rule is only there to force a failure so I could see it getting hit, and it does):
        <validation>
          <type defaultRuleset="default" assemblyName="MyProj.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="MyProj.Entities.UserAccount">
            <ruleset name="default">
              <properties>
                <property name="HashedPassword">
                  <validator negated="true" messageTemplate="" messageTemplateResourceName="" messageTemplateResourceType="" tag="" type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.NotNullValidator, Microsoft.Practices.EnterpriseLibrary.Validation, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Not Null Validator" />
                </property>
                <property name="Property">
                  <validator messageTemplate="" messageTemplateResourceName="" messageTemplateResourceType="" tag="" type="MyProj.Entities.UserAccountValidator, MyProj.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Custom Validator" />
                </property>
              </properties>
            </ruleset>
          </type>
        </validation>
    And here is a stripped-down version of the way I invoke the validation:
        var type = entity.GetType();
        var validator = ValidationFactory.CreateValidator(type, "default", new FileConfigurationSource("Validation.config"));
        var results = validator.Validate(entity);
    Any advice would be much appreciated! Thanks, Chris
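    One way to narrow this down (a hedged sketch, not from the original post) is to construct the custom validator directly and validate an instance, bypassing Validation.config entirely: if failures come back, the validator logic works and the fault lies in the ruleset wiring. The UserAccount instance and the console harness below are assumptions for illustration.
        // Hedged sketch: exercise UserAccountValidator without any configuration.
        using System.Collections.Specialized;
        using Microsoft.Practices.EnterpriseLibrary.Validation;

        class ValidatorSmokeTest
        {
            static void Main()
            {
                var userAccount = new UserAccount();   // hypothetical test entity
                var validator = new UserAccountValidator(new NameValueCollection());

                ValidationResults results = validator.Validate(userAccount);
                foreach (ValidationResult result in results)
                {
                    // If failures are logged here, the validator itself works
                    // and the problem lies in the configured ruleset.
                    System.Console.WriteLine(result.Message);
                }
            }
        }
    It is also worth double-checking the property named "Property" in the ruleset above: a property-level rule only runs against a property that actually exists on UserAccount, so if the entity has no member literally named Property, the custom validator may simply never be invoked.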

    Read the article
