Search Results

Search found 99443 results on 3978 pages for 'build server'.

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: (1) hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, which had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk; or (2) make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch and create a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add files to or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project.
    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (e.g. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
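    For concreteness, the tag-then-branch workflow described above maps onto a handful of Subversion commands. This is only a sketch: the repository URL, branch names, and revision number are illustrative, and it assumes a conventional trunk/branches/tags layout.

        # Create a maintenance branch from the v2.0 release tag
        svn copy http://svnserver/repo/tags/v2.0 \
                 http://svnserver/repo/branches/v2.0-maint \
                 -m "Create maintenance branch for 2.0.x releases"

        # Later, cherry-pick a specific trunk bugfix (say r1234) into the branch
        svn checkout http://svnserver/repo/branches/v2.0-maint
        cd v2.0-maint
        svn merge -c 1234 http://svnserver/repo/trunk .
        svn commit -m "Merge r1234 from trunk for the 2.0.1 maintenance release"

    The same svn copy step would also snapshot the QSetup project files and a versioned Dependencies folder if they lived in the repository alongside the source, which is essentially the first of the two options weighed above.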

  • SSIS: Deploying OLAP cubes using C# script tasks and AMO

    - by DrJohn
    As part of the continuing series on Building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Services Management Objects (AMO). OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems, as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly; and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database and the connection string to the data mart. Although it would be reasonably straightforward to change the content of the XML files programmatically, this adds another layer of complication and obscurity to the overall process. When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard!

    SSIS "forgets" assembly references
    If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2!

    Sample Package
    So let's take a look at the sample SSIS package I have provided, which can be downloaded from here: DeployOlapCubeExample.zip. Below is a screenshot after a successful run.

    Connection Managers
    The package has three connection managers:
    - AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of your OLAP database project once you have clicked the "Build" button in Visual Studio.
    - TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name.
    - SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file.
    Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime. When you run the sample, you should see that the script logs its activity to the output screen (see screenshot above). If you configure logging for the package, then these messages will also appear in your SSIS logging.
    Sample Code Walkthrough
    Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the name of both the target OLAP server and also the name of the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties. We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy.

        svr.Connect(olapServerName);
        if (svr.Connected)
        {
            // Drop the OLAP database if it already exists
            Database db = svr.Databases.FindByName(olapDatabaseName);
            if (db != null)
            {
                db.Drop();
            }
            // rest of script
        }

    Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chunk of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process.

        // Start generating the main part of the XMLA command
        XmlDocument xmlaCommand = new XmlDocument();
        xmlaCommand.LoadXml(string.Format(
            "<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'>" +
            "<Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object>" +
            "<DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>",
            olapDatabaseName));

    Next we need to merge the two XML files, which we can do by obtaining a reference to the ObjectDefinition node (oaRootNode below) and simply setting its InnerXml property as follows:

        // load OLAP Database definition from .asdatabase file identified by connection manager
        XmlDocument olapCubeDef = new XmlDocument();
        olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString);
        // merge the two XML files by obtaining a reference to the ObjectDefinition node
        oaRootNode.InnerXml = olapCubeDef.InnerXml;

    One hurdle I had to overcome was removing detritus from the .asdatabase file left by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail. The XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. In comparing the XMLA generated by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below:

        foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager))
        {
            node.ParentNode.RemoveChild(node);
        }

    Now we need to change the database name in both the ID and Name nodes using code such as:

        XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager);
        if (databaseID != null)
            databaseID.InnerText = olapDatabaseName;

    Next we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replace the content of the node with the new name or connection string.

        XmlNode connectionStringNode = xmlaCommand.SelectSingleNode(
            "//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager);
        if (connectionStringNode != null)
        {
            connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString;
        }

    Finally we need to perform the deployment using the Execute XMLA command and check the returned XmlaResultCollection for errors before setting the Dts.TaskResult.
        XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);

        // check for errors during deployment
        foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults)
        {
            foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages)
            {
                if (oMessage.GetType().Name == "XmlaError")
                {
                    FireError(oMessage.Description);
                    HadError = true;
                }
            }
        }

    If you are not familiar with XML programming, this may all seem a bit daunting, but persevere, as the sample code is pretty short. If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and even to synchronize the database to a front-end server. Personally, I like to keep the deployment and processing separate, as the code can otherwise become overly complex for support staff. If you want to know more, come see my session at the forthcoming SQLBits conference.
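    One small gap in the walkthrough above: the nsManager variable used by the XPath queries is never shown being created. A minimal sketch of what it presumably looks like is below; the ns1 prefix is an arbitrary choice, but the URI must match the Analysis Services engine namespace declared on the Batch element.

        // Namespace manager so XPath queries such as "//ns1:CreatedTimestamp"
        // can resolve names in the Analysis Services engine namespace
        XmlNamespaceManager nsManager = new XmlNamespaceManager(xmlaCommand.NameTable);
        nsManager.AddNamespace("ns1",
            "http://schemas.microsoft.com/analysisservices/2003/engine");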

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data, which is wired to drbd. I chose GFS2 mainly because of its announced performance and because the volume size has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines:

        Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable.
    I've increased it several times and apparently, after a while, the log messages started appearing less often. The value is now 32. After further investigating the issue, I came across a different hang, although the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs, which causes both the NFS and the storage webserver to stop serving files. Both stay hung for a while and then they resume normal operations. These hangs leave no trace on the client side (and also leave no nfs: server ... not responding messages), and, on the storage side, the logs appear to be empty, even though rsyslogd is running. The nodes connect through a 10Gbps non-dedicated connection, but I don't think this is an issue because the GFS2 hang can be confirmed by connecting directly to the active storage server. I've been trying to solve this for a while now and I had tried different NFS configuration options before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such:

        /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

        mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded the best performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production mode and I need to fix it so that these hangs won't happen in the future, and I don't really know for sure what I should be benchmarking and how. What I can tell is that this is happening due to heavy loads, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve these problems, as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are:

        Webservers:
        Intel Bi Xeon E5606 2x4 2.13GHz
        24GB DDR3
        Intel SSD 320 2 x 120GB Raid 1

        Storage:
        Intel i5 3550 3.3GHz
        16GB DDR3
        12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, when the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connections between either the servers or the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem.
    EDIT 2: Relevant software versions:

        CentOS 2.6.32-279.9.1.el6.x86_64
        nfs-utils-1.2.3-26.el6.x86_64
        nfs-utils-lib-1.1.5-4.el6.x86_64
        gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
        kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
        drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on storage servers:

        #/etc/drbd.d/storage.res
        resource storage {
            protocol C;
            on <server1 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server1 ip>:7788;
                meta-disk internal;
            }
            on <server2 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server2 ip>:7788;
                meta-disk internal;
            }
        }

    NFS configuration on storage servers:

        #/etc/sysconfig/nfs
        RPCNFSDCOUNT=32
        STATD_PORT=10002
        STATD_OUTGOING_PORT=10003
        MOUNTD_PORT=10004
        RQUOTAD_PORT=10005
        LOCKD_UDPPORT=30001
        LOCKD_TCPPORT=30001

    (Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration:

        # gfs2_tool gettune <mountpoint>
        incore_log_blocks = 1024
        log_flush_secs = 60
        quota_warn_period = 10
        quota_quantum = 60
        max_readahead = 262144
        complain_secs = 10
        statfs_slow = 0
        quota_simul_sync = 64
        statfs_quantum = 30
        quota_scale = 1.0000 (1, 1)
        new_files_jdata = 0

    Storage network environment:

        eth0      Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
                  inet6 addr: <ip address> Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

        eth0:0    Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    The IP addresses are statically assigned with the given network configurations:

        DEVICE="eth0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        ONBOOT="yes"
        TYPE="Ethernet"
        IPADDR=<ip address>
        NETMASK=<net mask>

    and

        DEVICE="eth0:0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        IPADDR=<ip failover>
        NETMASK=<net mask>
        ONBOOT="yes"
        BROADCAST=<bcast address>

    Hosts file to allow for a graceful NFS failover, in conjunction with NFS option fsid=25 set on both storage servers:

        #/etc/hosts
        <storage ip failover address> active.storage.vlan
        <webserver ip failover address> active.service.vlan

    As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.
    INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

  • Deploying axis2 on glassfish

    - by user115524
    I'm trying to deploy the Axis2 v1.6.2 war on Glassfish v3.1.2 and I'm having some problems... I need to develop a few web services, and since the main app is being served on Glassfish I was hoping to deploy Axis2 on it so I could test it. I used the Glassfish administration pages to deploy the war downloaded from the Apache site, but after pointing the application deployment form to this war I'm getting an error, and this is the (long) stack trace: [#|2013-06-27T11:34:40.701+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|WebModule[/axis23403634363287739103]StandardWrapper.Throwable java.lang.ExceptionInInitializerError at org.apache.axis2.transport.http.AxisServlet.initConfigContext(AxisServlet.java:584) at org.apache.axis2.transport.http.AxisServlet.init(AxisServlet.java:454) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1453) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1250) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5093) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5380) at com.sun.enterprise.web.WebModule.start(WebModule.java:498) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:917) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:901) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:733) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2019) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1669) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:109) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207) at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147) at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117) at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:724) Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable. 
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874) at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685) at org.apache.axis2.deployment.DeploymentEngine.(DeploymentEngine.java:76) ... 68 more |#] -- cut -- ... the end of the stacktrace: [#|2013-06-27T11:34:40.714+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while invoking class com.sun.enterprise.web.WebApplication start method java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable. at com.sun.enterprise.web.WebApplication.start(WebApplication.java:138) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207) at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147) at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117) at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:724) |#] [#|2013-06-27T11:34:40.715+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app|#] [#|2013-06-27T11:34:40.862+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.|#] [#|2013-06-27T11:34:40.875+0200|INFO|glassfish3.1.2|org.glassfish.admingui|_ThreadID=85;_ThreadName=admin-thread-pool-4848(5);|Exception Occurred :Error occurred during deployment: Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: 
org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.. Please see server.log for more details.|#] Is it even possible to deploy axis2 on glassfish?
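    For what it's worth, the root cause in these traces is a commons-logging LogConfigurationException: something in the war (typically a commons-logging.properties) names org.apache.commons.logging.impl.Log4JLogger as the Log implementation, but no log4j jar is visible to the webapp's class loader. Two commonly suggested directions, offered here only as hedged sketches (file locations assume the standard Axis2 WAR layout and have not been verified against this exact Axis2/GlassFish combination): either place a log4j jar in WEB-INF/lib, or point commons-logging at JDK logging instead:

        # WEB-INF/classes/commons-logging.properties (inside the axis2 war)
        org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger

    A related GlassFish-side knob is per-application class-loader delegation, which can make the webapp prefer its own logging classes over the server's, via a WEB-INF/sun-web.xml such as:

        <?xml version="1.0" encoding="UTF-8"?>
        <sun-web-app>
            <class-loader delegate="false"/>
        </sun-web-app>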

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
    I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, "Hmmm, yeah, that could be really interesting", and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone that happens to read this and (b) I can maybe come back here in a few years and see if either of these have come to fruition.

    Your phone as a web server
    I was listening to Jon Udell's (twitter) "Interviews with Innovators Podcast" today, in which he interviewed Herbert Van de Sompel (twitter) about his Memento project. During the interview Jon and Herbert made the following remarks:
    Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we're getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine…
    Herbert: wasn't it Opera who at one point turned your browser into a server?
    That really got my brain ticking. We all carry a mobile phone with us, and therefore we all potentially carry a mobile web server with us as well; to my mind the only thing really stopping that from happening is the capabilities of the phone hardware, the capabilities of the network infrastructure and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer DNS and IPv6 seem to solve that problem), so why not? I tweeted about the idea and Rory Street answered back with "why would you want a phone to be a web server?". It's a fair question and one that I would like to try and answer. Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook, but in each of these cases some other service is acting as our intermediary; to see what I'm thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I'm using 'I' liberally, I don't actually use FourSquare before you ask). Why should this have to be the case? Why can't that data be decentralised? Why can't we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following:
    http://jamiesphone.net/location/current - Where is Jamie right now?
    http://jamiesphone.net/location/2010-04-21 – Where was Jamie on 21st April 2010?
    http://jamiesphone.net/thoughts/current – What's on Jamie's mind right now?
    http://jamiesphone.net/blog – What documents is Jamie sharing with me?
    http://jamiesphone.net/calendar/next7days – Where is Jamie planning to be over the next 7 days?
    and those URLs get served off of the phones in our pockets. If we govern that data then we can control who has access to it and (crucially) how long it's available for. Want to wipe yourself off the face of the web? It's pretty easy if you're in control of all the data – just turn your phone off. None of this exists today, but I look forward to a time when it does.
    Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system – it isn't totally decentralised).

    Opening up Twitter annotations
    Last week Twitter held their first developer conference, called Chirp, where they announced an upcoming new feature called 'Twitter Annotations'; in short, this will allow us to attach metadata to a Tweet, thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value or triple-tuple store. That alone has huge implications both for the web and Twitter as a whole – the ability to enrich those 140 characters of data and thus make them more useful is indeed compelling. However, today I stumbled upon a blog post from Eugene Mandel entitled Tweet Annotations – a Way to a Metadata Marketplace? where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included:
    - Amazon could attach an ISBN to a tweet that mentions a book. Specialist client apps for book lovers could be built up around this metadata.
    - Advertisers could pay to place adverts in metadata. The revenue generated from those adverts could be shared with the tweeter or the people who add the metadata.
    Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven't even envisaged, but spam hasn't halted the growth of the web and neither should it halt the growth of data annotations. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself: "Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses." What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google's success today is built on their PageRank algorithm, which measures the validity of a web page by the number of incoming links to it and the page rank of the sites containing those links – it's a system built on reputation. Twitter annotations could open up a new paradigm, however – let's call it 'People rank' – where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It's not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose! Neither of these features, phones as a web server or the ability to add annotations to other people's tweets, exists today, but I strongly believe that they could dramatically enhance the web as we know it today. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place. @Jamiet

  • SQLAuthority News – Community Service and Public Speaking Engagements

    - by pinaldave
    Today is the last day of the year and I was going over my memories of 2010. Almost all of them are good, and I certainly feel like a better person in terms of knowledge, nature, and as an overall human being. Looking back at the year, it is very satisfying, as I was able to go out in public and help the community out in various capacities. Though most of the time my contribution was as a speaker, many times I reached out, helped organize events, and worked in whatever capacity was needed to get the event out. I have taken part in many TechEds, PASS events, Virtual Tech Days, and various community events around the globe, and my contribution is not limited to my country only. Overall, I feel good to be part of this wonderful and supportive community.
    SQLAuthority News – A Successful Community TechDays at Ahmedabad – December 11, 2010
    SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010
    SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune
    SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience
    SQLAuthority News – Statistics and Best Practices – Virtual Tech Days – Nov 22, 2010
    SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010
    SQL SERVER – Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
    SQLAuthority News – Feedback Received for Virtual Tech Days Sessions on Spatial Database
    SQLAuthority News – Community Tech Days, Ahmedabad – July 24, 2010
    SQLAuthority News – SQL Data Camp, Chennai, July 17, 2010 – A Huge Success
    SQLAuthority News – 2 Sessions at TechInsight 2010 – June 29 – July 1, 2010
    SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch
    SQLAuthority News – Professional Development and Community
    SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Opportunity of A Lifetime
    SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion
    SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience
    SQLAuthority News – Author Visit Review – TechMela Nepal – March 29-30, 2010
    SQLAuthority News – Excellent Event – TechEd Sri Lanka – Feb 8, 2010
    SQLAuthority News – Hyderabad Techies February Fever Feb 11, 2010 – Indexing for Performance
    SQLAuthority News – MUGH – Microsoft User Group Hyderabad – Feb 2, 2010 Session Review
    SQLAuthority News – Ahmedabad Community Tech Days – Jan 30, 2010 – Huge Success
    For earlier years' contributions you can check my webpage over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blog posts that demonstrate how to add management capability to your own application using JMX MBeans. In Part I we saw:
    - How to implement a custom MBean to manage configuration associated with an application.
    - How to package the resulting code and configuration as part of the application's ear file.
    - How to register MBeans upon application startup, and unregister them upon application stop (or undeployment).
    - How to use generic JMX clients such as JConsole to browse and edit our application's MBean.
    In Part II we saw:
    - How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters.
    - How to specify meaningful names for our MBean operation parameters.
    We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will re-write our MBean to simplify how we added localized descriptions. To do so we will take advantage of the functionality we already described in Part II, which is now part of WebLogic 10.3.3.0. We will show how to take advantage of WebLogic's localization support to localize our MBeans based on the client's Locale, independently of the server's Locale. Each client will see MBean descriptions localized based on his/her own Locale. We will show how to achieve this using JConsole, and also using a sample programmatic JMX Java client. The complete code sample and associated build files for part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK6. To build and deploy our sample application, please follow the instructions provided in Part I, as they also apply to part III's code and associated zip file.

    Providing custom descriptions, take II
    In Part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0, similarly to JDK 7, can automatically localize MBean descriptions as long as those are specified according to the following conventions. Description resource bundle keys are named as follows:
    - MBean description: <MBeanInterfaceClass>.mbean
    - MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName>
    - MBean operation description: <MBeanInterfaceClass>.operation.<OperationName>
    - MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName>
    - MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName>
    - MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName>
    We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean.
    We already followed the above conventions when creating our resource bundle in Part II, and our default resource bundle class with English descriptions looks like:

        package blog.wls.jmx.appmbean;

        import java.util.ListResourceBundle;

        public class MBeanDescriptions extends ListResourceBundle {
            protected Object[][] getContents() {
                return new Object[][] {
                    {"PropertyConfigMXBean.mbean",
                     "MBean used to manage persistent application properties"},
                    {"PropertyConfigMXBean.attribute.Properties",
                     "Properties associated with the running application"},
                    {"PropertyConfigMXBean.operation.setProperty",
                     "Create a new property, or change the value of an existing property"},
                    {"PropertyConfigMXBean.operation.setProperty.key",
                     "Name that identifies the property to set."},
                    {"PropertyConfigMXBean.operation.setProperty.value",
                     "Value for the property being set"},
                    {"PropertyConfigMXBean.operation.getProperty",
                     "Get the value for an existing property"},
                    {"PropertyConfigMXBean.operation.getProperty.key",
                     "Name that identifies the property to be retrieved"}
                };
            }
        }

    We have now also added a resource bundle with French localized descriptions:

        package blog.wls.jmx.appmbean;

        import java.util.ListResourceBundle;

        public class MBeanDescriptions_fr extends ListResourceBundle {
            protected Object[][] getContents() {
                return new Object[][] {
                    {"PropertyConfigMXBean.mbean",
                     "Manage proprietes sauvegarde dans un fichier disque."},
                    {"PropertyConfigMXBean.attribute.Properties",
                     "Proprietes associee avec l'application en cour d'execution"},
                    {"PropertyConfigMXBean.operation.setProperty",
                     "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."},
                    {"PropertyConfigMXBean.operation.setProperty.key",
                     "Nom de la propriete dont la valeur est change."},
                    {"PropertyConfigMXBean.operation.setProperty.value",
                     "Nouvelle valeur"},
                    {"PropertyConfigMXBean.operation.getProperty",
                     "Retourne la valeur d'une propriete existante."},
                    {"PropertyConfigMXBean.operation.getProperty.key",
                     "Nom de la propriete a retrouver."}
                };
            }
        }

    So now we can just remove the many getDescription methods from our MBean code, and are left with a much cleaner class:

        package blog.wls.jmx.appmbean;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.File;
        import java.net.URL;
        import java.util.Map;
        import java.util.HashMap;
        import java.util.Properties;

        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.management.MBeanRegistration;
        import javax.management.StandardMBean;
        import javax.management.MBeanOperationInfo;
        import javax.management.MBeanParameterInfo;

        public class PropertyConfig extends StandardMBean
                implements PropertyConfigMXBean, MBeanRegistration {

            private String relativePath_ = null;
            private Properties props_ = null;
            private File resource_ = null;
            private static Map<String, String[]> operationsParamNames_ = null;

            static {
                operationsParamNames_ = new HashMap<String, String[]>();
                operationsParamNames_.put("setProperty", new String[] {"key", "value"});
                operationsParamNames_.put("getProperty", new String[] {"key"});
            }

            public PropertyConfig(String relativePath) throws Exception {
                super(PropertyConfigMXBean.class, true);
                props_ = new Properties();
                relativePath_ = relativePath;
            }

            public String setProperty(String key, String value) throws IOException {
                String oldValue = null;
                if (value == null) {
                    oldValue = String.class.cast(props_.remove(key));
                } else {
                    oldValue = String.class.cast(props_.setProperty(key, value));
                }
                save();
                return oldValue;
            }

            public String getProperty(String key) {
                return props_.getProperty(key);
            }

            public Map getProperties() {
                return (Map) props_;
            }

            private void load() throws IOException {
                InputStream is = new FileInputStream(resource_);
                try {
                    props_.load(is);
                } finally {
                    is.close();
                }
            }

            private void save() throws IOException {
                OutputStream os = new FileOutputStream(resource_);
                try {
                    props_.store(os, null);
                } finally {
                    os.close();
                }
            }

            public ObjectName preRegister(MBeanServer server, ObjectName name)
                    throws Exception {
                // MBean must be registered from an application thread
                // to have access to the application ClassLoader
                ClassLoader cl = Thread.currentThread().getContextClassLoader();
                URL resourceUrl = cl.getResource(relativePath_);
                resource_ = new File(resourceUrl.toURI());
                load();
                return name;
            }

            public void postRegister(Boolean registrationDone) { }
            public void preDeregister() throws Exception { }
            public void postDeregister() { }

            protected String getParameterName(MBeanOperationInfo op,
                                              MBeanParameterInfo param,
                                              int sequence) {
                return operationsParamNames_.get(op.getName())[sequence];
            }
        }

    The only reason we are still extending the StandardMBean class is to override the default values for our operation parameter names. If this isn't a concern, then one could just write the following code:

        package blog.wls.jmx.appmbean;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.File;
        import java.net.URL;
        import java.util.Map;
        import java.util.Properties;

        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.management.MBeanRegistration;

        public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration {

            private String relativePath_ = null;
            private Properties props_ = null;
            private File resource_ = null;

            public PropertyConfig(String relativePath) throws Exception {
                props_ = new Properties();
                relativePath_ = relativePath;
            }

            public String setProperty(String key, String value) throws IOException {
                String oldValue = null;
                if (value == null) {
                    oldValue = String.class.cast(props_.remove(key));
                } else {
                    oldValue = String.class.cast(props_.setProperty(key, value));
                }
                save();
                return oldValue;
            }

            public String getProperty(String key) {
                return props_.getProperty(key);
            }

            public Map getProperties() {
                return (Map) props_;
            }

            private void load() throws IOException {
                InputStream is = new FileInputStream(resource_);
                try {
                    props_.load(is);
                } finally {
                    is.close();
                }
            }

            private void save() throws IOException {
                OutputStream os = new FileOutputStream(resource_);
                try {
                    props_.store(os, null);
                } finally {
                    os.close();
                }
            }

            public ObjectName preRegister(MBeanServer server, ObjectName name)
                    throws Exception {
                // MBean must be registered from an application thread
                // to have access to the application ClassLoader
                ClassLoader cl = Thread.currentThread().getContextClassLoader();
                URL resourceUrl = cl.getResource(relativePath_);
                resource_ = new File(resourceUrl.toURI());
                load();
                return name;
            }

            public void postRegister(Boolean registrationDone) { }
            public void preDeregister() throws Exception { }
            public void postDeregister() { }
        }

    Note: the above would also require changing the operation parameter names in the resource bundle classes.
    For instance:

        PropertyConfigMXBean.operation.setProperty.key

    would become:

        PropertyConfigMXBean.operation.setProperty.p0

    Client-based localization

    When accessing our MBean using JConsole started with the following command line:

        jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
                 -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug

    we see that our MBean descriptions are localized according to the WebLogic server's Locale, English in this case.

    Note: Consult Part I for information on how to use JConsole to browse/edit our MBean.

    Now if we specify the client's Locale as part of the JConsole command line, as follows:

        jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
                 -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote \
                 -J-Dweblogic.management.remote.locale=fr-FR -debug

    we see that our MBean descriptions are now localized according to the specified client's Locale, French in this case.

    We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the client's JMX connections. The value is composed of the client's language code and its country code, separated by the - character. The country code is not required and can be omitted. For instance:

        -Dweblogic.management.remote.locale=fr

    We can also specify the client's Locale using a programmatic client, as demonstrated below:

        package blog.wls.jmx.appmbean.client;

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.MBeanInfo;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXServiceURL;
        import javax.management.remote.JMXConnectorFactory;

        import java.util.Hashtable;
        import java.util.Set;
        import java.util.Locale;

        public class JMXClient {

            public static void main(String[] args) throws Exception {
                JMXConnector jmxCon = null;
                try {
                    JMXServiceURL serviceUrl = new JMXServiceURL(
                        "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime");
                    System.out.println("Connecting to: " + serviceUrl);

                    // properties associated with the connection
                    Hashtable<String, Object> env = new Hashtable<String, Object>();
                    env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                            "weblogic.management.remote");
                    String[] credentials = new String[2];
                    credentials[0] = "weblogic";
                    credentials[1] = "weblogic";
                    env.put(JMXConnector.CREDENTIALS, credentials);

                    // specifies the client's Locale
                    env.put("weblogic.management.remote.locale", Locale.FRENCH);

                    jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
                    jmxCon.connect();
                    MBeanServerConnection con = jmxCon.getMBeanServerConnection();

                    Set<ObjectName> mbeans = con.queryNames(
                        new ObjectName(
                            "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"),
                        null);

                    for (ObjectName mbeanName : mbeans) {
                        System.out.println("\n\nMBEAN: " + mbeanName);
                        MBeanInfo minfo = con.getMBeanInfo(mbeanName);
                        System.out.println("MBean Description: " + minfo.getDescription());
                        System.out.println("\n");
                    }
                } finally {
                    // release the connection
                    if (jmxCon != null) jmxCon.close();
                }
            }
        }

    The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script.
    The resulting output is shown below:

        $ ./client.sh
        Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime

        MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties
        MBean Description: Manage proprietes sauvegarde dans un fichier disque.
        $

    Miscellaneous

    Using the Description annotation to specify MBean descriptions

    Earlier we saw how to name our MBean description resource keys so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to explicitly specify the resource key and the resource bundle, for instance when operations are overloaded and the operation name is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic, as follows:

        import weblogic.management.utils.Description;

        @Description(resourceKey="myapp.resources.TestMXBean.description",
                     resourceBundleBaseName="myapp.resources.MBeanResources")
        public interface TestMXBean {

            @Description(resourceKey="myapp.resources.TestMXBean.threshold.description",
                         resourceBundleBaseName="myapp.resources.MBeanResources")
            public int getThreshold();

            @Description(resourceKey="myapp.resources.TestMXBean.reset.description",
                         resourceBundleBaseName="myapp.resources.MBeanResources")
            public int reset(
                @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description",
                             resourceBundleBaseName="myapp.resources.MBeanResources",
                             displayNameKey="myapp.resources.TestMXBean.reset.id.displayName.description")
                int id);
        }

    The Description annotation is applied to the MBean interface. It can be used to specify MBean, MBean attribute, MBean operation, and MBean operation parameter descriptions, as demonstrated above.

    Retrieving the Locale associated with a JMX operation from the MBean code

    There are several cases where it is necessary to retrieve the Locale associated with a JMX call from within the MBean implementation, for instance when localizing exception messages. This can be done as follows:

        import weblogic.management.mbeanservers.JMXContextUtil;
        ......
        // some MBean method implementation
        public String setProperty(String key, String value) throws IOException {
            Locale callersLocale = JMXContextUtil.getLocale();
            // use callersLocale to localize Exception messages or
            // potentially some return values, such as a Date
            ....
        }

    Conclusion

    With this last part we conclude our three-part series on how to write MBeans to manage J2EE applications. We are far from having exhausted this particular topic, but we have come a long way and are now able to take advantage of the latest functionality provided by the WebLogic application server to write user-friendly MBeans.


  • SQLAuthority News – Random Thoughts and Random Ideas

    - by pinaldave
    There are days when I keep on wondering about SQL, and even about my life overall. Today is Saturday, so I decided to write about SQL Server. Just like any other morning, I woke up at 5 and opened my blog editor. I usually do not open Twitter or Facebook when I am planning to focus on my work, as they are little distractions for me. But today I opened my Twitter account and came across a very interesting quote from a friend: "Can I expect you to be different today?" Well, I think it was a very powerful quote for me to read first thing on a new day. This quote froze me for a while and made me think, "Do I really want to write about an SQL Server tip, or something different?" After a little thinking, I realized that today I would go ahead and write something different. I am going to write about a few of the ideas and thoughts I had yesterday. After writing all these, I realized that if I am thinking so much in a day, and if I write a blog post of my random musings of the week or month, it could get very long (and boring). Here are some of my random thoughts I'd like to share with you:

    - When the airplane lands, why does everybody get up and try to rush out when their luggage will probably arrive 20-30 minutes later?
    - I really do not like this question when it is asked of me: "SQL Server is not using the optimal index which I just created. How can I force it?" I am not going to elaborate on this statement, but you are welcome to in the comment section.
    - Why do some people wish "Good Morning" even when they meet us after 4 PM?
    - Can I optimize a query so much that it gives me a result before I execute it?
    - Is it corruption when someone does their personal household work at the office?
    - The lane where I drive is always the slowest lane.
    - Why waste time on correcting others when there are a lot of pending improvements for ourselves?
    - If I had to get a tattoo, which SQL Server execution plan symbol should I get?
    - Why do I reach the office so early that the coffee machine is still running its daily cleaning job?
    - Why does every laptop have the 'Page Up' key at a different location on the keyboard?
    - While I like color movies, I really appreciate black and white photographs.
    - I do not appreciate statements like, "If I receive your books in PDF, I will spread them to many people to give you much greater exposure. So would you please send them to me ASAP?"
    - Do not ask me, "Why does the database grow back after shrinking it every day?" I suggest you use "Search this blog" for the explanation.
    - Petrol prices are currently at INR 74. I hope the rate stays there.

    Let me ask you the same question which started my day today: "Can I expect you to be different today?"

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship". I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And it is hard not to wince when we read notes in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
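    To see the contrast concretely, here is a minimal sketch of such a guard clause written both ways. The dbo.Customers table is purely illustrative, and the procedural IF wrapper is T-SQL in either case, so the portability argument concerns only the metadata query itself:

        -- Standards-based: query the INFORMATION_SCHEMA views
        IF EXISTS (SELECT 1
                   FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo'
                     AND TABLE_NAME = 'Customers')
            DROP TABLE dbo.Customers;

        -- Vendor-specific: query SQL Server's catalog views
        IF EXISTS (SELECT 1
                   FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.Customers')
                     AND type = N'U')
            DROP TABLE dbo.Customers;

    Both versions do the same job; it is only the second that the vendor's documentation encourages us to trust.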


  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, everything works fine. However, this process is going to be used in a TFS 2010 build process, and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt.

    More info: this is on a Win2008 R2 Enterprise machine. The user is a machine admin and on the domain. The TFS build service is also set to run as this user. I am using a self-signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following those instructions I have the following files:

        MyCA.cer
        MyCA.pvk
        MySPC.cer
        MySPC.pvk
        MySPC.pfx

    MyCA is in my Trusted Root Certification Authorities. I imported MySPC.pfx into my personal certificates, following the advice here: SignTool error: Access is denied. To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section, so my signtool command looks like:

        sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName"

    Once again, this works in admin mode. This also works when running cmd as administrator:

        sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName"

    I am new to code signing in general, so any help is welcome.


  • Error starting modern compiler

    - by saloni
    In my servlet project I am using Tomcat 5.0 and JRE 1.5.0, but it gives an error when I click on the URL. When I created a war file of my project and deployed it in Tomcat, it worked fine, so the problem must lie with my Eclipse configuration. The error is:

        Apr 5, 2010 3:20:22 PM org.apache.jasper.compiler.Compiler generateClass
        SEVERE: Javac exception
        Error starting modern compiler
            at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:69)
            at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:942)
            at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:764)
            at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:382)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:472)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:451)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:439)
            at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:511)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:295)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
            at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
            at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:61)
            ... 35 more
        Caused by: java.lang.VerifyError: class com.sun.tools.javac.jvm.Target overrides final method .
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(Unknown Source)
            at java.security.SecureClassLoader.defineClass(Unknown Source)
            at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:1634)
            at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:860)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1307)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1189)
            at java.lang.ClassLoader.loadClassInternal(Unknown Source)
            at com.sun.tools.javac.Main.compile(Main.java:42)
            ... 40 more
        --- Nested Exception ---
        java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:61)
            at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:942)
            at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:764)
            at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:382)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:472)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:451)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:439)
            at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:511)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:295)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:292)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:236)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
            at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
            at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
            at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
            at java.lang.Thread.run(Unknown Source)
        Caused by: java.lang.VerifyError: class com.sun.tools.javac.jvm.Target overrides final method .
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(Unknown Source)
            at java.security.SecureClassLoader.defineClass(Unknown Source)
            at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:1634)
            at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:860)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1307)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1189)
            at java.lang.ClassLoader.loadClassInternal(Unknown Source)
            at com.sun.tools.javac.Main.compile(Main.java:42)
            ... 40 more

        Apr 5, 2010 3:20:22 PM org.apache.jasper.compiler.Compiler generateClass
        SEVERE: Env: Compile:
        javaFileName=/D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/work/Catalina/localhost/SampleSaloni//org/apache/jsp/page\form_jsp.java
        classpath=/D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/classes/;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant-launcher.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-collections-3.1.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-dbcp-1.2.1.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-el.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-pool-1.2.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-compiler.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-runtime.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jsp-api.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-common.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-factory.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-java.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-resources.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/tools.jar;
            D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/classes/;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant-launcher.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/ant.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-collections-3.1.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-dbcp-1.2.1.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-el.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/commons-pool-1.2.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-compiler.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jasper-runtime.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/jsp-api.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-common.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-factory.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-java.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/naming-resources.jar;
            /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/SampleSaloni/WEB-INF/lib/tools.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/classes/;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/ant-launcher.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/ant.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-collections-3.1.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-dbcp-1.2.1.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-el.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/commons-pool-1.2.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/jasper-compiler.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/jasper-runtime.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/jsp-api.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-common.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-factory.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-java.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/naming-resources.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/servlet-api.jar;
            D:/software setups/jakarta-tomcat-5.0.28/common/lib/tools.jar;
            /D:/software%20setups/jakarta-tomcat-5.0.28/bin/bootstrap.jar;
            /C:/Program%20Files/Java/jre1.5.0_09/lib/ext/dnsns.jar;
            /C:/Program%20Files/Java/jre1.5.0_09/lib/ext/sunjce_provider.jar;
            /C:/Program%20Files/Java/jre1.5.0_09/lib/ext/sunpkcs11.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\bin\bootstrap.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\classes
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant-launcher.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-collections-3.1.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-dbcp-1.2.1.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-el.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-pool-1.2.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-compiler.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-runtime.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jsp-api.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-common.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-factory.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-java.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-resources.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\tools.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\classes
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant-launcher.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\ant.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-collections-3.1.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-dbcp-1.2.1.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-el.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\commons-pool-1.2.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-compiler.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jasper-runtime.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\jsp-api.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-common.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-factory.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-java.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\naming-resources.jar
        cp=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\SampleSaloni\WEB-INF\lib\tools.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\classes
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\ant-launcher.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\ant.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-collections-3.1.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-dbcp-1.2.1.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-el.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\commons-pool-1.2.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\jasper-compiler.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\jasper-runtime.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\jsp-api.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-common.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-factory.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-java.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\naming-resources.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\servlet-api.jar
        cp=D:\software setups\jakarta-tomcat-5.0.28\common\lib\tools.jar
        cp=D:\software%20setups\jakarta-tomcat-5.0.28\bin\bootstrap.jar
        cp=C:\Program%20Files\Java\jre1.5.0_09\lib\ext\dnsns.jar
        cp=C:\Program%20Files\Java\jre1.5.0_09\lib\ext\sunjce_provider.jar
        cp=C:\Program%20Files\Java\jre1.5.0_09\lib\ext\sunpkcs11.jar
        work dir=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni
        extension dir=C:\Program Files\Java\jre1.5.0_09\lib\ext
        srcDir=D:\OffViv\JAVA_IDE\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\work\Catalina\localhost\SampleSaloni
        include=org/apache/jsp/page/form_jsp.java

        Apr 5, 2010 3:20:22 PM org.apache.jasper.compiler.Compiler generateClass
        SEVERE: Error compiling file: /D:/OffViv/JAVA_IDE/workspace/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/work/Catalina/localhost/SampleSaloni//org/apache/jsp/page\form_jsp.java
        [javac] Compiling 1 source file


  • Can make -j with distcc scale more than 5 times?

    - by holmes
    Since distcc cannot keep state, and can only send jobs and headers and let the servers use just the data sent to preprocess and compile, I think the latest distcc has a scalability problem. In my local build environment, which has approximately 10,000 C/C++ files to build, I could only build 2 times faster than without distcc (but using make -j) when using 20 build servers. What do you think is the problem? If anyone has achieved scalability of more than 10-20 times using make -j and distcc, please let me know. The following product claims that it is impossible to scale make -j with distcc more than 5 times faster: http://www.electric-cloud.com/products/electricaccelerator.php

    I think this can be improved by:
    - Letting the distccd server maintain sessions
    - Tied to those sessions, having the servers cache their own header directories
    - Doing preprocessing on demand from the distccd server
    - Achieving this through an LD_PRELOADed library, libdistcc.so, which replaces the stat/open syscalls and fetches the header files over the network
    ...

    Has anyone done this kind of thing?


  • Eclipse: What is the minimum Eclipse installation needed for a headless PDE build?

    - by Christoph
    Hi, I am currently using PDE build in headless mode to build my OSGi bundle project. The PDE AntRunner task uses an Eclipse installation, and I am just pointing it to my local Eclipse installation. Unfortunately my Eclipse installation is about 260MB, but I assume that a PDE build does NOT require all of the plugins in a standard Eclipse installation. Does anyone know what the minimum list of plugins is for doing a headless PDE build? All of my dependencies are actually in a custom target platform folder, so I guess the only thing I need from my Eclipse installation is the set of dependencies which PDE build itself needs. But what are those? Can I shrink my installation to a bare minimum? My goal is to also check this "build-eclipse" folder into my project's SVN, so that when you check it out, you have everything you need to start a full build, without touching any build.properties. But I don't want to commit 266MB of Eclipse if I maybe need only 20MB of it. Thanks, Christoph


  • Android Studio Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    - by akenawell85x
    I recently decided to switch from Eclipse to Android Studio. I imported a project I was working on and am now getting this error when I try to run the project:

        Gradle: Execution failed for task ':project:dexDebug'.
        > Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    I've been cruising this site for 2 days now and trying different suggestions, to no avail. I did run gradlew compileDebug --stacktrace, and this is what I got:

        C:\Users\adam\AndroidStudioProjects\projectProject>gradlew compileDebug --stacktrace
        Relying on packaging to define the extension of the main artifact has been deprecated and is scheduled to be removed in Gradle 2.0
        :project:preBuild UP-TO-DATE
        :project:preDebugBuild UP-TO-DATE
        :project:preReleaseBuild UP-TO-DATE
        :project:prepareComAndroidSupportAppcompatV71800Library UP-TO-DATE
        :project:prepareComGoogleAndroidGmsPlayServices3225Library UP-TO-DATE
        :project:prepareDebugDependencies
        :project:compileDebugAidl UP-TO-DATE
        :project:compileDebugRenderscript UP-TO-DATE
        :project:generateDebugBuildConfig UP-TO-DATE
        :project:mergeDebugAssets UP-TO-DATE
        :project:mergeDebugResources UP-TO-DATE
        :project:processDebugManifest UP-TO-DATE
        :project:processDebugResources UP-TO-DATE
        :project:generateDebugSources UP-TO-DATE
        :project:compileDebug UP-TO-DATE

        BUILD SUCCESSFUL

        Total time: 10.459 secs

    However, I am still getting that error when I try to actually run the project. Here is my build.gradle (I do have a 'libs' folder in my project with all the jars for a Google Maps/Places app):

        buildscript {
            repositories {
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.6.+'
            }
        }
        apply plugin: 'android'

        repositories {
            mavenCentral()
        }

        android {
            compileSdkVersion 18
            buildToolsVersion "18.1.1"

            defaultConfig {
                minSdkVersion 8
                targetSdkVersion 18
            }
        }

        dependencies {
            compile fileTree(dir: 'libs')
            compile 'com.google.android.gms:play-services:3.2.25'
            compile 'com.android.support:support-v4:18.0.0'
            compile 'com.android.support:appcompat-v7:+'
        }

    and my settings.gradle:

        include ':project',
                ':project:libs:android-support-v4',
                ':project:libs:google-api-client-1.10.3-beta',
                ':project:libs:google-api-client-android2-1.10.3-beta',
                ':project:libs:google-http-client-1.10.3-beta',
                ':project:libs:google-http-client-android2-1.10.3-beta',
                ':project:libs:google-oauth-client-1.10.1-beta',
                ':project:libs:gson-2.1',
                ':project:libs:guava-11.0.1',
                ':project:libs:jackson-core-asl-1.9.4',
                ':project:libs:jsr305-1.3.9',
                ':project:libs:protobuf-java-2.2.0',
                ':project:libs:GoogleAdMobAdsSdk-6.4.1'

    As I said, I've tried pretty much everything I have read on here about this error and am having no luck. Any help would be greatly appreciated.


  • TeamCity Integrated with Xcode Projects (BUILD RUNNER)

    - by alex n
    I'm trying to get TeamCity to work with the GitHub server for our Xcode projects. I've got the git server working and now I'm stuck at the Build Runner settings. I downloaded the teamcity-xcode plugin from http://github.com/orj/teamcity-xcode and moved it into the ~/.BuildServer/plugins folder. Is there any kind of tutorial on how to set up TeamCity for Xcode projects?


  • Why is Assembly.GetCustomAttributes suddenly throwing TypeLoadException on build machine with Silverlight 4?

    - by andrej351
    A short while back I had to display the current version of our Silverlight app. After some googling, the following code gave me the desired result:

        var fileVersionAttributes = typeof(MyClass).Assembly.GetCustomAttributes(
            typeof(AssemblyFileVersionAttribute), false) as AssemblyFileVersionAttribute[];
        var version = fileVersionAttributes[0].Version;

    This worked a treat in our .NET 3.5 Silverlight 3 environment. However, we recently upgraded to .NET 4 and Silverlight 4. We just finished getting our build machine working and found that the unit test for this code was throwing the following exception:

        Exception Message: System.TypeLoadException: Error 0x80131522. Debugging resource strings are unavailable.
        See http://go.microsoft.com/fwlink/?linkid=106663&Version=3.0.50106.0&File=mscorrc.dll&Key=0x80131522
            at System.ModuleHandle.ResolveType(ModuleHandle module, Int32 typeToken, RuntimeTypeHandle* typeInstArgs, Int32 typeInstCount, RuntimeTypeHandle* methodInstArgs, Int32 methodInstCount)
            at System.ModuleHandle.ResolveTypeHandle(Int32 typeToken, RuntimeTypeHandle[] typeInstantiationContext, RuntimeTypeHandle[] methodInstantiationContext)
            at System.Reflection.Module.ResolveType(Int32 metadataToken, Type[] genericTypeArguments, Type[] genericMethodArguments)
            at System.Reflection.CustomAttribute.FilterCustomAttributeRecord(CustomAttributeRecord caRecord, MetadataImport scope, Assembly& lastAptcaOkAssembly, Module decoratedModule, MetadataToken decoratedToken, RuntimeType attributeFilterType, Boolean mustBeInheritable, Object[] attributes, IList derivedAttributes, RuntimeType& attributeType, RuntimeMethodHandle& ctor, Boolean& ctorHasParameters, Boolean& isVarArg)
            at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean mustBeInheritable, IList derivedAttributes, Boolean isDecoratedTargetSecurityTransparent)
            at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean isDecoratedTargetSecurityTransparent)
            at System.Reflection.CustomAttribute.GetCustomAttributes(Assembly assembly, Type caType)
            at System.Reflection.Assembly.GetCustomAttributes(Type attributeType, Boolean inherit)
            at MyCode.VersionTest()

    I have never seen this exception before, and the link in it points nowhere. It is only thrown on the build machine and not on my development box, so I'm going through a process of trial and error to find any differences between the two. Any idea why this might be happening? Cheers, Andrej.


  • COM server build using Python on a 64-bit Windows 7 machine

    - by Vijayendra Bapte
    Original post is here: http://mail.python.org/pipermail/python-win32/2010-December/011011.html

    I am using:
    - OS: 64-bit Windows 7 Professional
    - Python: python-2.7.1.amd64
    - Python win32 extensions: pywin32-214.win-amd64-py2.7
    - Py2exe: py2exe-0.6.9.win64-py2.7.amd64

    I am trying to build an icon overlay for Windows. It has worked fine on 32-bit Windows but is not working on 64-bit Windows 7. Here are the Python modules I have created for testing:

    test_icon_overlay.py (http://mail.python.org/pipermail/python-win32/attachments/20101229/bb8c78a4/attachment-0002.obj): COM server created in Python for an icon overlay, which adds a check-mark overlay icon (C:\icons\test.ico) on the "C:\icons" folder.

    setup_VI.py (http://mail.python.org/pipermail/python-win32/attachments/20101229/bb8c78a4/attachment-0003.obj): setup file which creates test_icon_overlay.dll for distribution.

    icons.zip (http://mail.python.org/pipermail/python-win32/attachments/20101229/bb8c78a4/attachment-0001.zip): for testing, you should extract icons.zip inside C:\

    The icon overlay appears on the C:\icons folder when I execute python test_icon_overlay.py at the Windows command prompt and restart explorer.exe. But it is not working with the dll file created using setup_VI.py. I created the dll file using python setup_VI.py py2exe and then tried to register it using regsvr32 test_icon_overlay.dll. Registration fails with the Windows error message "Error 0x80040201 while registering shell extension".

    Then I turned on the logger in Python27/Lib/site-packages/py2exe/boot_com_servers.py, and here is the traceback which I am getting in comerror.txt on regsvr32 test_icon_overlay.dll:

        PATH is ['C:\\root\\avalon\\module\\sync\\python\\src\\dist\\library.zip']
        Traceback (most recent call last):
          File "boot_com_servers.py", line 37, in <module>
        pywintypes.error: (126, 'GetModuleFileName', 'The specified module could not be found.')
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        NameError: name 'DllRegisterServer' is not defined

    It looks like there might be a problem with win32api.GetModuleFileName(sys.frozendllhandle), or with the dll built on 64-bit Windows 7. Also, I saw that the installation of pywin32-214.win-amd64-py2.7 on 64-bit Windows 7 finishes with the error message:

        Snapshot close failed in file object destructor:
        sys.excepthook is missing
        lost sys.stderr

    Is there anything which I am doing wrong? Any help on this is highly appreciated.


  • Apache + CodeIgniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access. I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior:

    - http://mysite.com/robots.txt : both the old and new setups resolve to the robots file.
    - http://mysite.com/robots.txt/ : the old Bluehost setup resolves to my CodeIgniter 404 error page. The Rackspace config resolves to "Not Found. The requested URL /robots.txt/ was not found on this server." This instance leads me to believe that there could be a problem with my mod_rewrite rules, or lack thereof. The first case produces the error correctly through PHP, while in the second it seems the server handles the error itself.

    The next instance of this problem is even more troubling: 'http://mysite.com/search/term/9 x 1-1%2F2 white/'. The new site results in "Bad Request. Your browser sent a request that this server could not understand." The old site results in the actual page being loaded and the search term being unencoded.

    I have to assume this has something to do with the fact that when I went to the new server I went from a root-level htaccess file to the httpd.conf file and the virtual host files default and default-ssl. Here they are:

    default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www

            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    default-ssl file:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www

            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            # SSL Engine Switch: Enable/Disable SSL for this virtual host.
            SSLEngine on

            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key

            # A self-signed (snakeoil) certificate can be created by installing the ssl-cert package. See /usr/share/doc/apache2.2-common/README.Debian.gz for more info. If both key and certificate are stored in the same file, only the SSLCertificateFile directive is needed.
            # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain: Point SSLCertificateChainFile at a file containing the concatenation of PEM encoded CA certificates which form the certificate chain for the server certificate. Alternatively the referenced file can be the same as SSLCertificateFile when the CA certificates are directly appended to the server certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA): Set the CA certificate verification path where to find CA certificates for client authentication or alternatively one huge file containing all of them (file must be PEM encoded). Note: Inside SSLCACertificatePath you need hash symlinks to point to the certificate files. Use the provided Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL): Set the CA revocation path where to find CA CRLs for client authentication or alternatively one huge file containing all of them (file must be PEM encoded). Note: Inside SSLCARevocationPath you need hash symlinks to point to the certificate files. Use the provided Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type): Client certificate verification type and depth. Types are none, optional, require and optional_no_ca. Depth is a number which specifies how deeply to verify the certificate issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10

            # Access Control: With SSLRequire you can do per-directory access control based on arbitrary complex boolean expressions containing server variable checks and other lookup directives. The syntax is a mixture between C and Perl. See the mod_ssl documentation for more details.
            #<Location />
            #SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            #            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            #            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            #            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            #            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20       ) \
            #           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options: Set various options for the SSL engine.
            # o FakeBasicAuth: Translate the client X.509 into a Basic Authorisation. This means that the standard Auth/DBMAuth methods can be used for access control. The user name is the `one line' version of the client's X.509 certificate. Note that no password is obtained from the user. Every entry in the user file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData: This exports two additional environment variables: SSL_CLIENT_CERT and SSL_SERVER_CERT. These contain the PEM-encoded certificates of the server (always existing) and the client (only existing when client authentication is used). This can be used to import the certificates into CGI scripts.
            # o StdEnvVars: This exports the standard SSL/TLS related `SSL_*' environment variables. Per default this exportation is switched off for performance reasons, because the extraction step is an expensive operation and is usually useless for serving static content. So one usually enables the exportation for CGI and SSI requests only.
            # o StrictRequire: This denies access when "SSLRequireSSL" or "SSLRequire" applied even under a "Satisfy any" situation, i.e. when it applies access is denied and no other module can change it.
            # o OptRenegotiate: This enables optimized SSL connection renegotiation handling when SSL directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # SSL Protocol Adjustments: The safe and default but still SSL/TLS standard compliant shutdown approach is that mod_ssl sends the close notify alert but doesn't wait for the close notify alert from client. When you need a different shutdown approach you can use one of the following variables:
            # o ssl-unclean-shutdown: This forces an unclean shutdown when the connection is closed, i.e. no SSL close notify alert is send or allowed to received. This violates the SSL/TLS standard but is needed for some brain-dead browsers. Use this when you receive I/O errors because of the standard approach where mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown: This forces an accurate shutdown when the connection is closed, i.e. a SSL close notify alert is send and mod_ssl waits for the close notify alert of the client. This is 100% SSL/TLS standard compliant, but in practice often causes hanging connections with brain-dead browsers. Use this only for browsers where you know that their SSL implementation works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP keep-alive facility, so you usually additionally want to disable keep-alive for those clients, too. Use variable "nokeepalive" for this. Similarly, one has to force some clients to use HTTP/1.0 to workaround their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

    httpd.conf file: just a lot of stuff from the HTML5 Boilerplate; I will post it if need be.

    Old htaccess file:

        <IfModule mod_rewrite.c>
            # index.php remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]

            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>

    Any help would be hugely appreciated!!

  • noSQL/SQL/RoR: Trying to build a scalable ratings table for a game

    - by alexeypro
    I am trying to solve something that looks complex to me. I have these entities: PLAYER (a few of them, with names like "John", "Peter", etc.; each has a unique ID -- for simplicity, think of it as their name), GAME (a few of them, say "Hide and Seek", "Jump and Run", etc.; again, each has a unique ID, which for now can also be its name), and SCORE (numeric). How it works: each PLAYER can play multiple GAMES and earns some SCORE in every GAME. I need to build rating tables -- and not just one! Table #1: most played GAMES. Table #2: best PLAYERS across all games (say, by total SCORE over every GAME). Table #3: best PLAYERS per GAME (by SCORE in that particular GAME). I could build something straightforward right away, but it would not hold up: I will have more than 10,000 players and 15 games, and the number of games will certainly grow. A player's score in a game can be as low as 0 and as high as 1,000,000 (not sure yet whether it can go higher), so I really need to work with aggregated, relative data. Any suggestions? I am planning to do it with SQL, but maybe only as key-value storage; anything -- any ideas are welcome. Thank you!
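    A minimal sketch of one relational layout for these three tables, using Python's built-in sqlite3 module so it runs as-is; the table and column names are illustrative assumptions, not anything given in the question. At 10,000+ players, the three rating queries would typically be recomputed periodically into summary tables (or cached) rather than run on every page view:

        import sqlite3

        # In-memory DB just to make the sketch self-contained; any SQL engine
        # would take the same schema and queries.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE games   (id INTEGER PRIMARY KEY, name TEXT);
            -- one row per play: who played which game, and the score earned
            CREATE TABLE scores  (player_id INTEGER REFERENCES players(id),
                                  game_id   INTEGER REFERENCES games(id),
                                  score     INTEGER NOT NULL);
            CREATE INDEX scores_by_game ON scores(game_id, player_id);
        """)

        # Table #1: most played games (by number of recorded plays)
        most_played = conn.execute("""
            SELECT g.name, COUNT(*) AS plays
            FROM scores s JOIN games g ON g.id = s.game_id
            GROUP BY g.id ORDER BY plays DESC""").fetchall()

        # Table #2: best players across all games (total score)
        best_overall = conn.execute("""
            SELECT p.name, SUM(s.score) AS total
            FROM scores s JOIN players p ON p.id = s.player_id
            GROUP BY p.id ORDER BY total DESC""").fetchall()

        # Table #3: best players within one particular game
        best_in_game = conn.execute("""
            SELECT p.name, SUM(s.score) AS total
            FROM scores s JOIN players p ON p.id = s.player_id
            WHERE s.game_id = ?
            GROUP BY p.id ORDER BY total DESC""", (1,)).fetchall()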

  • How to automate upsizing from Access to SQL Server?

    - by Arne
    Hi, I need to automate the migration from an Access (2003) database to a SQL Server database (2005 or 2008). The upsizing should happen automatically as part of a build process. I need this because there are two versions of the software: a single-user rich client and a web version. The Access DB is used for the single-user version to minimize setup effort, and SQL Server for the web version to improve performance and scaling with many simultaneous users. Access should be the "leading" DB, meaning developers make their changes in the Access DB and those changes are propagated to the SQL Server within the build process. Many changes will occur, so doing it manually is not an option. I am new to the Microsoft world, so I don't know the appropriate tools for this. What tools can I use, and how? I know how to do it (by clicking) with the upsizing assistant; perhaps I can automate that somehow? Thanks in advance for your answers. Cheers, Arne
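    For what it's worth, one way to script a naive table-by-table copy as a build step is sketched below with Python and pyodbc. This is only a rough illustration: it assumes the Access ODBC driver is installed on the build machine, and it flattens every column to NVARCHAR, where a real build step would translate the Access column types (or drive a dedicated tool such as Microsoft's SQL Server Migration Assistant, whose newer versions include a scriptable console mode). All paths, server, and database names here are hypothetical:

        import pyodbc

        ACCESS_MDB = r"C:\builds\app.mdb"  # hypothetical path to the Access file
        MSSQL_CONN = ("DRIVER={SQL Server};SERVER=buildbox;"
                      "DATABASE=App;Trusted_Connection=yes")  # hypothetical target

        access = pyodbc.connect(r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=" + ACCESS_MDB)
        mssql = pyodbc.connect(MSSQL_CONN)

        for t in access.cursor().tables(tableType="TABLE"):
            name = t.table_name
            src = access.cursor().execute("SELECT * FROM [%s]" % name)
            cols = [d[0] for d in src.description]
            dst = mssql.cursor()
            dst.execute("IF OBJECT_ID('%s') IS NOT NULL DROP TABLE [%s]" % (name, name))
            # Naive schema: every column becomes NVARCHAR(255). A real build
            # step would map the Access types to proper SQL Server types.
            dst.execute("CREATE TABLE [%s] (%s)"
                        % (name, ", ".join("[%s] NVARCHAR(255)" % c for c in cols)))
            marks = ", ".join("?" for _ in cols)
            for row in src:
                dst.execute("INSERT INTO [%s] VALUES (%s)" % (name, marks),
                            [v if v is None else str(v) for v in row])
        mssql.commit()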

  • Setting up Django on an internal server (os.environ() not working as expected?)

    - by monkut
    I'm trying to set up Django on an internal company server (no external connection to the Internet). Looking over the server setup documentation, the "Running Django on a shared-hosting provider with Apache" method seems the most likely to work in this situation. Here's the server information:

    - can't install mod_python
    - no root access
    - the server is SunOS 5.6
    - Python 2.5
    - Apache/2.0.46

    I've installed Django (and flup) using the --prefix option (reading again, I probably should've used --home, but at the moment it doesn't seem to matter). I've added the .htaccess file and mysite.fcgi file to my root web directory as mentioned here. When I run the mysite.fcgi script on the server I get the expected output (the correct site HTML), but it won't work when accessed from a browser. It seems it may be a problem with the PYTHONPATH setting, since I'm using the prefix option. I've noticed that if I run mysite.fcgi from the command line without setting the PYTHONPATH environment variable, it throws the following error:

    prompt$ python2.5 mysite.fcgi
    ERROR: No module named flup
    Unable to load the flup package. In order to run django as a FastCGI application, you will need to get flup from http://www.saddi.com/software/flup/ If you've already installed flup, then make sure you have it in your PYTHONPATH.

    I've added sys.path.append(prefixpath) and os.environ['PYTHONPATH'] = prefixpath to mysite.fcgi, but even if I set the environment variable to be empty on the command line and then run mysite.fcgi, I still get the above error. Here are some command-line results:

    >>> os.environ['PYTHONPATH'] = 'Null'
    >>> os.system('echo $PYTHONPATH')
    Null
    >>> os.environ['PYTHONPATH'] = '/prefix/path'
    >>> os.system('echo $PYTHONPATH')
    /prefix/path
    >>> exit()
    prompt$ echo $PYTHONPATH
    Null

    It looks like Python is setting the variable OK, but the change is only visible inside the script's own process. Flup appears to be distributed as an .egg file, and my guess is that the egg mechanism doesn't pick up variables added via os.environ['key'] = value (?), at least when installed with the --prefix option. I'm not that familiar with .pth files, but it seems that the easy-install.pth file is the one that points to flup:

    import sys; sys.__plen = len(sys.path)
    ./setuptools-0.6c6-py2.5.egg
    ./flup-1.0.1-py2.5.egg
    import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)

    It looks like it's doing something funky. Is there any way to edit this, or to add something to my code, so it will find flup?
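    One observation worth adding: assigning os.environ['PYTHONPATH'] inside a running script cannot fix this, because PYTHONPATH is only consulted when an interpreter starts -- changing it at runtime affects child processes at most, never the current sys.path. A sketch of a mysite.fcgi that adds the prefix's site directory before anything imports flup (the paths here are placeholders, not known values); the key point is that site.addsitedir(), unlike sys.path.append(), also evaluates the .pth files in the directory, which is what pulls the .egg entries from easy-install.pth onto sys.path:

        #!/usr/bin/env python2.5
        import os, site, sys

        # Hypothetical locations -- substitute the real --prefix install path
        # and the real project directory.
        PREFIX = '/prefix/path'
        PROJECT_DIR = '/path/to/mysite'

        # addsitedir() processes easy-install.pth, so the setuptools and
        # flup .egg directories end up on sys.path.
        site.addsitedir(os.path.join(PREFIX, 'lib', 'python2.5', 'site-packages'))
        sys.path.insert(0, PROJECT_DIR)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")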

  • SqlMetal, SQL Server 2008 database, table with HierarchyID: DAL .cs file is created only sometimes?

    - by judek.mp
    I have two databases, each with a table containing a HierarchyID field. For one database I can get a DAL .cs file; for the other I cannot. HBus is the database that works:

    SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

    This produces a file which works, but which excludes the HierarchyID field of the table while including all its other fields. That is OK; I don't mind. The command emits warnings but still produces the file:

    Microsoft (R) Database Mapping Generator 2008 version 1.00.30729
    for Microsoft (R) .NET Framework version 3.5
    Copyright (C) Microsoft Corporation. All rights reserved.

    Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.HMsg' from SqlServer because the column's DbType is a user-defined type (UDT).
    Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.vwHMsg' from SqlServer because the column's DbType is a user-defined type (UDT).

    HMsg is a table with a HierarchyID field. I have another database, Elf, set up almost identically, but there I get an error as well as the warning, and the .cs file never appears on my disc:

    SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

    Microsoft (R) Database Mapping Generator 2008 version 1.00.30729
    for Microsoft (R) .NET Framework version 3.5
    Copyright (C) Microsoft Corporation. All rights reserved.

    Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.EntityLink' from SqlServer because the column's DbType is a user-defined type (UDT).
    Error : Requested value 'ELF.SYS.HIERARCHYID' was not found.

    The field is declared the same way in both databases -- OrgNode [HierarchyID] NULL -- and both databases live in the same SQL Server 2008 instance, so HierarchyID is the built-in type; neither DB has a HierarchyID UDT of its own. Cheers in advance for any replies.
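    As a side note on build automation: since SqlMetal sometimes only warns (and still writes the file) and sometimes errors out (and writes nothing), a build script can at least turn the silent failure into a hard one. A small sketch in Python, reusing the flags quoted above; the server, database, and output names are simply the ones from this question and purely illustrative:

        import os, subprocess, sys

        # Flags copied from the invocation quoted above.
        OUTPUT = "ElfDataContextDal.cs"
        cmd = ["SqlMetal", r"/server:.\SQLSERVER2008", "/database:Elf",
               "/code:" + OUTPUT, "/views", "/functions", "/sprocs",
               "/namespace:HBusDC", "/context:HBusDataContext"]

        # SqlMetal's exit code is not documented to be reliable for this
        # case, so also check that the output file actually appeared.
        rc = subprocess.call(cmd)
        if rc != 0 or not os.path.exists(OUTPUT):
            sys.exit("SqlMetal did not produce %s -- check the console "
                     "output for UDT (e.g. HierarchyID) extraction errors." % OUTPUT)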

  • Why do I get "file is not of required architecture" when I try to build my app on an iPhone?

    - by Dale
    My app seemingly runs fine in the simulator, but the first time I hooked a phone up to my system and had Xcode build for it, I got a huge error log with things like:

    Build SCCUI of project SCCUI with configuration Debug

    CompileXIB HandleAlert.xib
        cd /Users/gdbriggs/Desktop/SCCUI
        setenv IBC_MINIMUM_COMPATIBILITY_VERSION 3.1
        setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Developer/usr/bin/ibtool --errors --warnings --notices --output-format human-readable-text --compile /Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos/SCCUI.app/HandleAlert.nib /Users/gdbriggs/Desktop/SCCUI/HandleAlert.xib
    /* com.apple.ibtool.document.warnings */
    /Users/gdbriggs/Desktop/SCCUI/HandleAlert.xib:13: warning: UITextView does not support data detectors when the text view is editable.

    Ld build/Debug-iphoneos/SCCUI.app/SCCUI normal armv6
        cd /Users/gdbriggs/Desktop/SCCUI
        setenv IPHONEOS_DEPLOYMENT_TARGET 3.1
        setenv MACOSX_DEPLOYMENT_TARGET 10.5
        setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.sdk -L/Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos -F/Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos -F/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks -filelist /Users/gdbriggs/Desktop/SCCUI/build/SCCUI.build/Debug-iphoneos/SCCUI.build/Objects-normal/armv6/SCCUI.LinkFileList -mmacosx-version-min=10.5 -dead_strip -miphoneos-version-min=3.1 -framework Foundation -framework UIKit -framework CoreGraphics -framework MessageUI -o /Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos/SCCUI.app/SCCUI

    ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/Foundation.framework/Foundation, file is not of required architecture
    ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/UIKit.framework/UIKit, file is not of required architecture
    ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics, file is not of required architecture
    ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/MessageUI.framework/MessageUI, file is not of required architecture

    Undefined symbols:
    "_OBJC_CLASS_$_UIDevice", referenced from:
        __objc_classrefs__DATA@0 in SCAuthenticationHandler.o
    "_OBJC_CLASS_$_NSString", referenced from:
        __objc_classrefs__DATA@0 in CCProxy.o
        __objc_classrefs__DATA@0 in AlertSummaryViewController.o
        __objc_classrefs__DATA@0 in HomeLevelController.o
        __objc_classrefs__DATA@0 in SCAuthenticationHandler.o
        __objc_classrefs__DATA@0 in SCRequestHandler.o
    "_UIApplicationMain", referenced from:
        _main in main.o
    "_objc_msgSend", referenced from:
        _main in main.o
        _main in main.o
        _main in main.o
        -[SCCUIAppDelegate applicationDidFinishLaunching:] in

    and it just keeps going. At or near the bottom it says:

    ld: symbol(s) not found
    collect2: ld returned 1 exit status

    What am I doing wrong?

  • d:DesignData issue: Visual Studio 2010 can't build after adding sample design data with Expression Blend

    - by Valko
    Hi, my VS 2010 solution and its Silverlight project build fine. Then: I open the MyView.xaml view in Expression Blend 4 and add sample data from a class (I use a class defined in the same project). After adding the sample design data, everything looks fine -- the sample data shows up in EB 4 and in the VS 2010 designer too. But after closing EB 4, the next VS 2010 build gives me these errors:

    Error 7 XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved. C:\Code\source\...myview.xaml

    and:

    Error 12 Object reference not set to an instance of an object. ... TestSampleData.xaml

    When I open TestSampleData.xaml, I see that the namespace of the class used to define the sample data is not recognized -- even though that namespace and the class itself exist in the same project! If I remove the design data from MyView.xaml:

    d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    it builds fine, and this time the namespace in TestSampleData.xaml is recognized. If I then add the d:DataContext line back, I again see the sample data in the VS 2010 designer, but the next build fails, and again the studio can't find the namespace in TestSampleData.xaml. That cycle is driving me crazy. Am I missing something here? Is it not possible to have the class that defines the sample design data in the same project as the MyView.xaml view? Cheers, Valko
