Search Results

Search found 15353 results on 615 pages for 'native methods'.

Page 478/615

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, and then manually put them together to create the test rig. So, let's get down and dirty and start setting up the Test Rig.

    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

    Azure Connect – The Test Controller is on premise and the Test Agents are in the cloud, so how will they talk? To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect; there is also a very useful video explaining everything you wanted to know about it.

    Azure Compute – Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles'. Roles come in three types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents (there is a nice blog post discussing the differences between the three role types). Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role, and can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:

        Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
        Extra Small          | Shared    | 768 MB   | $0.04
        Small                | 1         | 1.75 GB  | $0.12
        Medium               | 2         | 3.50 GB  | $0.24
        Large                | 4         | 7.00 GB  | $0.48
        Extra Large          | 8         | 14.00 GB | $0.96

    You might want to review the Windows Azure Pricing FAQ.

    Let's get started building the Test Rig. The configuration we are working towards:

        Machine | Role                              | Comments
        VM – 1  | Domain Controller for Playpit.com | On Premise
        VM – 2  | TFS, Test Controller              | On Premise
        VM – 3  | Test Agent                        | Cloud

    In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

    I. Let's start building VM – 3: The Test Agent

    1. Download the Windows Azure SDK and Tools.
    2. Open Visual Studio and create a new Windows Azure Project using the Cloud template.
    3. Choose the Worker Role, for reasons explained in the earlier post.
    4. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (which should have been installed as part of the Windows Azure Toolkit) on your local machine.
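    For reference, the generated WorkerRole.cs looks roughly like the sketch below. This is a minimal reconstruction of the standard worker role template of that SDK generation, not code from the original post; details vary between SDK versions.

        using System.Net;
        using System.Threading;
        using System.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // The template worker just loops and traces; the Test Agent
                // software we install later does the real work on this VM.
                Trace.WriteLine("WorkerRole1 entry point called", "Information");
                while (true)
                {
                    Thread.Sleep(10000);
                    Trace.WriteLine("Working", "Information");
                }
            }

            public override bool OnStart()
            {
                // Default template behaviour: raise the connection limit.
                ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }
        }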
    We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef and ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing the Connect module because I need to join the Worker role VM to the Playpit domain.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="WorkerRole1" vmsize="Small">
            <Imports>
              <Import moduleName="Diagnostics" />
              <Import moduleName="Connect" />
            </Imports>
          </WorkerRole>
        </ServiceDefinition>

    Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key prefix 'Microsoft.WindowsAzure.Plugins.Connect.' have been added to the configuration file. This is because you decided to import the Connect module. See the config below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="WorkerRole1">
            <Instances count="1" />
            <ConfigurationSettings>
              <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

    osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you have to change a registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily 2 is if you want IIS 7.5.

    ActivationToken – To enable communication between the on-premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click Connect, then click Get Activation Token; this gives you the activation token. Copy it to the clipboard and paste it into the configuration file.
    Note – Later in the blog I'll show you how to install Connect on the on-premise machine.

    EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.

    DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain; I extracted it from the 'service manager' and 'Active Directory Users and Computers'. I also created a new domain OU named 'CloudInstances' so that all the cloud instances joined to my domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.

    Once you have filled all this information in, the configuration file should look something like this:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
          <Role name="WorkerRole1">
            <Instances count="1" />
            <ConfigurationSettings>
              <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>
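    As a quick sanity check (my addition, not part of the original walkthrough), the role can read these plugin settings back at runtime through the Azure SDK's RoleEnvironment API, which is handy when a domain join fails silently. A minimal sketch, assuming the Connect module has been imported as above; the helper name and the choice of settings to trace are mine:

        using System.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public static class ConnectDiagnostics
        {
            // Trace the Connect settings this role instance was actually deployed with.
            public static void TraceConnectSettings()
            {
                string[] keys =
                {
                    "Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin",
                    "Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN",
                    "Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN",
                    "Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName"
                };

                foreach (string key in keys)
                {
                    // GetConfigurationSettingValue throws if the key is missing,
                    // which in itself tells you the Connect module was not imported.
                    Trace.WriteLine(key + " = " + RoleEnvironment.GetConfigurationSettingValue(key));
                }
            }
        }

    You could call TraceConnectSettings() from OnStart() and watch the output in the compute emulator or via diagnostics.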
    Next we will enable the Remote Desktop module in ServiceDefinition.csdef. We could make the changes manually, or allow a beautiful wizard to help us; I prefer the second option. So right click the Windows Azure project and choose Publish. In the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'. As soon as you do that you get another pop-up asking for the details of the user you will log in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click it to get access to the certificate section.

    From the drop-down, select the option to create a new certificate. In the pop-up window enter a friendly name for your certificate (in my case I entered 'WAC – Test Rig') and click OK. This creates a new certificate for you. Click the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go in the config file (very important). Now click the 'Copy to File' button to export the certificate; we will need to import it into the Windows Azure Management Portal later, so make sure you save it in a safe location.

    Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote desktop configuration' screen, click OK. From the Publish Windows Azure wizard screen press Cancel, then Yes: Cancel because we don't want to publish the role just yet, and Yes because we want to save all the changes to the config file.

    Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="WorkerRole1" vmsize="Small">
            <Imports>
              <Import moduleName="Diagnostics" />
              <Import moduleName="Connect" />
              <Import moduleName="RemoteAccess" />
              <Import moduleName="RemoteForwarder" />
            </Imports>
          </WorkerRole>
        </ServiceDefinition>

    Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of 'Microsoft.WindowsAzure.Plugins.RemoteAccess.*' settings added for you:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
          <Role name="WorkerRole1">
            <Instances count="1" />
            <ConfigurationSettings>
              <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
              <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
              <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
    II. Windows Azure Connect: Setting Up Connect on VM – 2, i.e. TFS & Test Controller

    Glad you made it so far! Now, to enable communication between the on-premise TFS/Test Controller and the Azure-ed Test Agent, the Connect endpoints need to be enabled on the on-premise machines (we have already set up the Azure Connect module in the Test Agent configuration). Let's have a look at how we can do this.

    1. Log on to VM – 2, the machine running the TFS server and Test Controller.
    2. Log on to the Windows Azure Management Portal and click Virtual Network. If you already have a subscription you should see the screen shown in the original post; if not, you will be asked to complete the subscription first.
    3. Click 'Install Local Endpoints' at the top left of the panel; you get a URL with a token ID appended. Remember the token I showed you earlier: in theory, the token you get here should match the token you added to the Test Agent config file.
    4. Copy the URL to the clipboard and paste it into Internet Explorer (important: at present the installation only works from IE, and you need cookies enabled to complete it). As stated in the pop-up, you can NOT download the software and run it later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.
    5. Right click the Azure Connect icon, choose Diagnostics, and refer to this link for the diagnostic terminology.

    NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

    6. Now go back to the Windows Azure Management Portal and, from Groups and Roles, create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS server VM where you just installed the endpoint).
    7. If you now go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the icon's status changes from disconnected to ready for connection.

    III. Importing the Certificate into the Windows Azure Management Portal

    Before going further you need to import the certificate you created in Step I into the Windows Azure Management Portal.

    1. Log on to the Windows Azure Management Portal and click 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by 'Add Certificates'.
    2. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot).
    3. You should now be able to see the imported certificate; make sure its thumbprint matches the one you inserted in the config files.

    IV. Publish the Windows Azure Worker Role aka Test Agent

    Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
    From the View menu, select the Windows Azure Activity Log window; you should now be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of the new Worker Role.

    Once the deployment is complete you should be able to RDP in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account (go to the run prompt, type mstsc and enter the machine name in the pop-up). In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is: because you imported the Connect module, you can connect to the Test Agent machine through the Windows Azure Management Portal and troubleshoot the reason for failure. You will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), and then either fix the problem or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move on to the next step.

    So, log in to the Test Agent Worker Role VM with the Playpit domain administrator account and verify that you can log in, that the machine is connected to the domain, and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished!

    Go to the Windows Azure Management Portal, click Virtual Network, click Groups and Roles, and edit the 'Test Rig' group you created earlier. In the 'Connect to' section, click Add to select the worker role you have just deployed. Also check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between the test agents themselves. Click Save.

    Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

    V. Configuring VM – 3: Installing the Test Agent and Associating It with the Controller

    Log in to the Worker Role Test Agent VM that you have just deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso') and extract the ISO; in my case, I extracted it to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

    Once you have successfully installed the Test Agent software you will need to configure it. Right click the Test Agent configuration tool and run it as a different user, i.e. an administrator. This is really to run the configuration wizard with elevated privileges (UAC might otherwise block some things).

    In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

    Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.

    And now the moment of truth! Go to VM – 2, open Visual Studio and, from the Test menu, select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured.

    VI. Creating and Running Load Tests on Your Brand New Azure-ed Test Rig

    I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

    Blog posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate

    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate

    Now that you have created your load tests, there is one last change to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to 'Remote Execution', and select the test controller you configured the Worker Role Test Agent against, in our case VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-ed test rig.

    Review and What's Next?

    A quick recap of the benefits of running the test rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!

    Advantages:
    - Utilize the power of Azure compute to run a heavy virtual user load.
    - Benefit from Azure's flexibility: destroy Test Agents when not in use; it takes less than 25 minutes to spin up a new Test Agent.
    - Most important, test network latency. Network latency and speed of connection are two different things, and network latency is usually very hard to test. By placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

    Next steps: The process of spinning up the Test Agents in Windows Azure is not yet 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software, and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions, feel free to leave a comment. See you in Part III.


  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you'll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle
    InnoDB is the default storage engine for Oracle's MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no "silver bullet." But understanding the configuration setting's impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle
    Many top Web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures.
    It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle
    Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!


  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple of people ask me about calculated properties/columns in Entity Framework this week. The question was: is there a way to specify a property in my C# class that is the result of some calculation involving two properties of the same class? For example, in my database I store a FirstName and a LastName column, and I would like a FullName property that is computed from the FirstName and LastName columns. My initial answer was:

        public string FullName
        {
            get { return string.Format("{0} {1}", FirstName, LastName); }
        }

    Of course, this works fine, but it does not give us the ability to write queries using the FullName property. For example, this query:

        var users = context.Users.Where(u => u.FullName.Contains("anan"));

    would result in the following NotSupportedException:

        The specified type member 'FullName' is not supported in LINQ to Entities.
        Only initializers, entity members, and entity navigation properties are supported.

    It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of computed columns in SQL Server. While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use them. Let's start by defining our C# classes and DbContext:

        public class UserProfile
        {
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string LastName { get; set; }

            [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
            public string FullName { get; private set; }
        }

        public class UserContext : DbContext
        {
            public DbSet<UserProfile> Users { get; set; }
        }

    The DatabaseGenerated attribute is needed on our FullName property. This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run two commands in the Package Manager Console. First, run Enable-Migrations to enable Code First Migrations for the UserContext. Next, run Add-Migration Initial to create an initial migration. This will create a migration that creates the UserProfile table with three columns: FirstName, LastName, and FullName. This is where we need to make a small change. Instead of allowing Code First Migrations to create the FullName column, we will manually add that column as a computed column:

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
                CreateTable(
                    "dbo.UserProfiles",
                    c => new
                        {
                            Id = c.Int(nullable: false, identity: true),
                            FirstName = c.String(),
                            LastName = c.String(),
                            //FullName = c.String(),
                        })
                    .PrimaryKey(t => t.Id);
                Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName");
            }

            public override void Down()
            {
                DropTable("dbo.UserProfiles");
            }
        }

    Finally, run the Update-Database command. Now we can query for Users using the FullName property, and that query will be executed on the database server. However, we encounter another potential problem. Since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.
    Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property:

        [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
        public string FullName
        {
            get { return FirstName + " " + LastName; }
            private set
            {
                //Just need this here to trick EF
            }
        }

    Now we can query for Users using the FullName property, and we also won't need to worry about the FullName property being out of sync with the FirstName and LastName properties. When we run this code:

        using (UserContext context = new UserContext())
        {
            UserProfile userProfile = new UserProfile { FirstName = "Chanandler", LastName = "Bong" };

            Console.WriteLine("Before saving: " + userProfile.FullName);

            context.Users.Add(userProfile);
            context.SaveChanges();

            Console.WriteLine("After saving: " + userProfile.FullName);

            UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong");
            Console.WriteLine("After reading: " + chanandler.FullName);

            chanandler.FirstName = "Chandler";
            chanandler.LastName = "Bing";

            Console.WriteLine("After changing: " + chanandler.FullName);
        }

    we get this output (screenshot in the original post). It took a bit of work, but finally Chandler's TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class. This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.
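    A variation worth knowing about (my addition, not from the original post): if you also want to index the computed column, marking it PERSISTED makes SQL Server store the value in the row, so the index is cheap to maintain at the cost of a little extra storage and write overhead. A sketch of the same migration with a persisted, indexed column, assuming the table and column names used in the post:

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
                CreateTable(
                    "dbo.UserProfiles",
                    c => new
                        {
                            Id = c.Int(nullable: false, identity: true),
                            FirstName = c.String(),
                            LastName = c.String(),
                        })
                    .PrimaryKey(t => t.Id);

                // PERSISTED stores the computed value in the row, which lets us index it.
                Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName PERSISTED");
                Sql("CREATE INDEX IX_UserProfiles_FullName ON dbo.UserProfiles (FullName)");
            }

            public override void Down()
            {
                DropTable("dbo.UserProfiles");
            }
        }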


  • JSON error Caused by: java.lang.NullPointerException

    - by user3821853
    I'm trying to make a register page on Android using JSON. Every time I press the register button on the AVD, I get the error "unfortunately database has stopped". There is an error in my logcat that I cannot understand. This is my code; please can someone help me? This is my register.java:

    import android.app.Activity; import android.app.ProgressDialog; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import java.util.ArrayList; import java.util.List; public class Register extends Activity implements OnClickListener{ private EditText user, pass; private Button mRegister; // Progress Dialog private ProgressDialog pDialog; // JSON parser class JSONParser jsonParser = new JSONParser(); //php register script //localhost : //testing on your device //put your local ip instead, on windows, run CMD > ipconfig //or in mac's terminal type ifconfig and look for the ip under en0 or en1 // private static final String REGISTER_URL = "http://xxx.xxx.x.x:1234/webservice/register.php"; //testing on Emulator: private static final String REGISTER_URL = "http://10.0.2.2:1234/webservice/register.php"; //testing from a real server: //private static final String REGISTER_URL = "http://www.mybringback.com/webservice/register.php"; //ids private static final String TAG_SUCCESS = "success"; private static final String TAG_MESSAGE = "message"; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.register); user = (EditText)findViewById(R.id.username); pass = (EditText)findViewById(R.id.password); mRegister = (Button)findViewById(R.id.register); mRegister.setOnClickListener(this); } @Override public void onClick(View v) { // TODO Auto-generated method stub new CreateUser().execute(); } class CreateUser extends AsyncTask<String, String, String> { @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(Register.this); pDialog.setMessage("Creating User..."); pDialog.setIndeterminate(false); pDialog.setCancelable(true); pDialog.show(); } @Override protected String doInBackground(String... 
args) { // TODO Auto-generated method stub // Check for success tag int success; String username = user.getText().toString(); String password = pass.getText().toString(); try { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("username", username)); params.add(new BasicNameValuePair("password", password)); Log.d("request!", "starting"); //Posting user data to script JSONObject json = jsonParser.makeHttpRequest( REGISTER_URL, "POST", params); // full json response Log.d("Registering attempt", json.toString()); // json success element success = json.getInt(TAG_SUCCESS); if (success == 1) { Log.d("User Created!", json.toString()); finish(); return json.getString(TAG_MESSAGE); }else{ Log.d("Registering Failure!", json.getString(TAG_MESSAGE)); return json.getString(TAG_MESSAGE); } } catch (JSONException e) { e.printStackTrace(); } return null; } protected void onPostExecute(String file_url) { // dismiss the dialog once product deleted pDialog.dismiss(); if (file_url != null){ Toast.makeText(Register.this, file_url, Toast.LENGTH_LONG).show(); } } } } this is JSONparser.java import android.util.Log; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpGet; import org.apache.http.client.methods.HttpPost; import org.apache.http.client.utils.URLEncodedUtils; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONException; import org.json.JSONObject; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.UnsupportedEncodingException; import java.util.List; public class JSONParser { static InputStream is = null; static JSONObject jObj = null; static String json = ""; // constructor public JSONParser() { } public JSONObject getJSONFromUrl(final String url) { // Making HTTP request try { // Construct the client and the HTTP request. DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); // Execute the POST request and store the response locally. HttpResponse httpResponse = httpClient.execute(httpPost); // Extract data from the response. HttpEntity httpEntity = httpResponse.getEntity(); // Open an inputStream with the data content. is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { // Create a BufferedReader to parse through the inputStream. BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); // Declare a string builder to help with the parsing. StringBuilder sb = new StringBuilder(); // Declare a string to store the JSON object data in string form. String line = null; // Build the string until null. while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } // Close the input stream. is.close(); // Convert the string builder data to an actual string. json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // Try to parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // Return the JSON Object. 
return jObj; } // function get json from url // by making HTTP POST or GET mehtod public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) { // Making HTTP request try { // check for request method if(method == "POST"){ // request method is POST // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); }else if(method == "GET"){ // request method is GET DefaultHttpClient httpClient = new DefaultHttpClient(); String paramString = URLEncodedUtils.format(params, "utf-8"); url += "?" + paramString; HttpGet httpGet = new HttpGet(url); HttpResponse httpResponse = httpClient.execute(httpGet); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } and this my error 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/Buffer Error? Error converting result java.lang.NullPointerException: lock == null 08-18 23:40:02.381 2000-2018/com.example.blackcustomzier.database E/JSON Parser? Error parsing data org.json.JSONException: End of input at character 0 of 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database W/dalvikvm? threadid=15: thread exiting with uncaught exception (group=0xb0f37648) 08-18 23:40:02.391 2000-2018/com.example.blackcustomzier.database E/AndroidRuntime? 
FATAL EXCEPTION: AsyncTask #4 java.lang.RuntimeException: An error occured while executing doInBackground() at android.os.AsyncTask$3.done(AsyncTask.java:299) at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) at java.util.concurrent.FutureTask.setException(FutureTask.java:219) at java.util.concurrent.FutureTask.run(FutureTask.java:239) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) at java.lang.Thread.run(Thread.java:841) Caused by: java.lang.NullPointerException at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:108) at com.example.blackcustomzier.database.Register$CreateUser.doInBackground(Register.java:74) at android.os.AsyncTask$2.call(AsyncTask.java:287) at java.util.concurrent.FutureTask.run(FutureTask.java:234)             at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)             at java.lang.Thread.run(Thread.java:841) 08-18 23:40:02.501 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.591 2000-2000/com.example.blackcustomzier.database W/EGL_emulation? eglSurfaceAttrib not implemented 08-18 23:40:02.981 2000-2000/com.example.blackcustomzier.database E/WindowManager? Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... R......D 0,0-1026,288} that was originally added here android.view.WindowLeaked: Activity com.example.blackcustomzier.database.Register has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView{b1294c60 V.E..... 
R......D 0,0-1026,288} that was originally added here at android.view.ViewRootImpl.<init>(ViewRootImpl.java:345) at android.view.WindowManagerGlobal.addView(WindowManagerGlobal.java:239) at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:69) at android.app.Dialog.show(Dialog.java:281) at com.example.blackcustomzier.database.Register$CreateUser.onPreExecute(Register.java:85) at android.os.AsyncTask.executeOnExecutor(AsyncTask.java:586) at android.os.AsyncTask.execute(AsyncTask.java:534) at com.example.blackcustomzier.database.Register.onClick(Register.java:70) at android.view.View.performClick(View.java:4240) at android.view.View.onKeyUp(View.java:7928) at android.widget.TextView.onKeyUp(TextView.java:5606) at android.view.KeyEvent.dispatch(KeyEvent.java:2647) at android.view.View.dispatchKeyEvent(View.java:7343) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchKeyEvent(PhoneWindow.java:1933) at com.android.internal.policy.impl.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1408) at android.app.Activity.dispatchKeyEvent(Activity.java:2384) at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1860) at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:3791) at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:3774) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3483) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3540) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3406) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3379) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3429) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3398) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3516) at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:3666) at android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:1982) at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1698) at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1689) at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:1959) at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141) at android.os.MessageQueue.nativePollOnce(Native Method) at android.os.MessageQueue.next(MessageQueue.java:132) at android.os.Looper.loop(Looper.java:124) at android.app.ActivityThread.main(ActivityThread.java:5103) at java.lang.reflect.Method.invokeNative(Native Method) at 
    java.lang.reflect.Method.invoke(Method.java:525) at com.android.internal.os.ZygoteInit$MethodAndArgsCal

    Please help me solve this. Thanks!


  • Passing data between android ListActivities in Java

    - by Will Janes
    I am new to Android! I am having a problem getting this code to work... Basically I go from one list activity to another and pass the text from a list item through the intent of the activity to the new list view, then retrieve that text in the new list activity and then perform an HTTP request based on the value of that list item.

    LogCat
    04-05 17:47:32.370: E/AndroidRuntime(30135): FATAL EXCEPTION: main 04-05 17:47:32.370: E/AndroidRuntime(30135): java.lang.ClassCastException:android.widget.LinearLayout 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.thickcrustdesigns.ufood.CatogPage$1.onItemClick(CatogPage.java:66) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AdapterView.performItemClick(AdapterView.java:284) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.ListView.performItemClick(ListView.java:3731) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AbsListView$PerformClick.run(AbsListView.java:1959) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.handleCallback(Handler.java:587) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.dispatchMessage(Handler.java:92) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Looper.loop(Looper.java:130) 04-05 17:47:32.370: E/AndroidRuntime(30135): at android.app.ActivityThread.main(ActivityThread.java:3691) 04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invokeNative(Native Method) 04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invoke(Method.java:507) 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) 04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 04-05 17:47:32.370: E/AndroidRuntime(30135): at dalvik.system.NativeStart.main(Native Method)

    ListActivity 1
    package com.thickcrustdesigns.ufood; import java.util.ArrayList; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import android.app.ListActivity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.Button; import android.widget.ListView; import android.widget.TextView; public class CatogPage extends ListActivity { ListView listView1; Button btn_bk; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.definition_main); btn_bk = (Button) findViewById(R.id.btn_bk); listView1 = (ListView) findViewById(android.R.id.list); ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>(); nvp.add(new BasicNameValuePair("request", "categories")); ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp); String[] defs = new String[jsondefs.size()]; for (int i = 0; i < jsondefs.size(); i++) { try { defs[i] = jsondefs.get(i).getString("Name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs); listView1.setAdapter(adapter); ListView lv = getListView(); lv.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { TextView tv = (TextView) view; String p = tv.getText().toString(); Intent i = new Intent(getApplicationContext(), 
Results.class); i.putExtra("category", p); startActivity(i); } }); btn_bk.setOnClickListener(new View.OnClickListener() { public void onClick(View arg0) { Intent i = new Intent(getApplicationContext(), UFoodAppActivity.class); startActivity(i); } }); } } **ListActivity 2** package com.thickcrustdesigns.ufood; import java.util.ArrayList; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONException; import org.json.JSONObject; import android.app.ListActivity; import android.os.Bundle; import android.widget.ListView; public class Results extends ListActivity { ListView listView1; enum Category { Chicken, Beef, Chinese, Cocktails, Curry, Deserts, Fish, ForOne { public String toString() { return "For One"; } }, Lamb, LightBites { public String toString() { return "Light Bites"; } }, Pasta, Pork, Vegetarian } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); this.setContentView(R.layout.definition_main); listView1 = (ListView) findViewById(android.R.id.list); Bundle data = getIntent().getExtras(); String category = data.getString("category"); Category cat = Category.valueOf(category); String value = null; switch (cat) { case Chicken: value = "Chicken"; break; case Beef: value = "Beef"; break; case Chinese: value = "Chinese"; break; case Cocktails: value = "Cocktails"; break; case Curry: value = "Curry"; break; case Deserts: value = "Deserts"; break; case Fish: value = "Fish"; break; case ForOne: value = "ForOne"; break; case Lamb: value = "Lamb"; break; case LightBites: value = "LightBites"; break; case Pasta: value = "Pasta"; break; case Pork: value = "Pork"; break; case Vegetarian: value = "Vegetarian"; } ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>(); nvp.add(new BasicNameValuePair("request", "category")); nvp.add(new BasicNameValuePair("cat", value)); ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp); String[] defs = new String[jsondefs.size()]; for (int i = 0; i < jsondefs.size(); i++) { try { defs[i] = jsondefs.get(i).getString("Name"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } } uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs); listView1.setAdapter(adapter); } } Request package com.thickcrustdesigns.ufood; import java.io.BufferedReader; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.NameValuePair; import org.apache.http.client.HttpClient; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.json.JSONArray; import org.json.JSONObject; import android.content.Context; import android.util.Log; import android.widget.Toast; public class Request { @SuppressWarnings("null") public static ArrayList<JSONObject> fetchData(Context context, ArrayList<NameValuePair> nvp) { ArrayList<JSONObject> listItems = new ArrayList<JSONObject>(); InputStream is = null; try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost( "http://co350-11d.projects02.glos.ac.uk/php/database.php"); httppost.setEntity(new UrlEncodedFormEntity(nvp)); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); is = entity.getContent(); } catch (Exception e) { Log.e("log_tag", "Error in http connection" + 
e.toString()); } // convert response to string String result = ""; try { BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } reader.close(); result = sb.toString(); } catch (Exception e) { Log.e("log_tag", "Error converting result " + e.toString()); } try { JSONArray jArray = new JSONArray(result); for (int i = 0; i < jArray.length(); i++) { JSONObject jo = jArray.getJSONObject(i); listItems.add(jo); } } catch (Exception e) { Toast.makeText(context.getApplicationContext(), "None Found!", Toast.LENGTH_LONG).show(); } return listItems; } } Any help would be greatly appreciated! Many thanks!
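    The ClassCastException above is the usual row-layout trap: in onItemClick the view argument is the whole inflated row (a LinearLayout for a custom row layout like R.layout.definition_list), not the TextView inside it, so casting it directly fails at CatogPage.java:66. A minimal sketch of the likely fix, assuming the row layout contains a TextView (R.id.row_text below is a placeholder; use whatever id the TextView actually has in definition_list.xml):

    lv.setOnItemClickListener(new OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            // 'view' is the row's root layout; look up the TextView child
            // instead of casting the row itself.
            TextView tv = (TextView) view.findViewById(R.id.row_text);
            String p = tv.getText().toString();
            Intent i = new Intent(CatogPage.this, Results.class);
            i.putExtra("category", p);
            startActivity(i);
        }
    });

    Alternatively, if the adapter is backed by the defs array, parent.getItemAtPosition(position).toString() returns the clicked item's text without touching the view hierarchy at all.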

    Read the article

  • Is this proper OO design for C++?

    - by user121917
    I recently took a software processes course and this is my first time attempting OO design on my own. I am trying to follow OO design principles and C++ conventions. I attempted and gave up on MVC for this application, but I am trying to "decouple" my classes such that they can be easily unit-tested and so that I can easily change the GUI library used and/or the target OS. At this time, I have finished designing classes but have not yet started implementing methods. The function of the software is to log all packets sent and received, and display them on the screen (like WireShark, but for one local process only). The software accomplishes this by hooking the send() and recv() functions in winsock32.dll, or some other pair of analogous functions depending on what the intended Target is. The hooks add packets to SendPacketList/RecvPacketList. The GuiLogic class starts a thread which checks for new packets. When new packets are found, it utilizes the PacketFilter class to determine the formatting for the new packet, and then sends it to MainWindow, a native win32 window (with intent to later port to Qt). [Full size image of UML class diagram] Here are my classes in skeleton/header form (this is my actual code): class PacketModel { protected: std::vector<byte> data; int id; public: PacketModel(); PacketModel(byte* data, unsigned int size); PacketModel(int id, byte* data, unsigned int size); int GetLen(); bool IsValid(); //len >= sizeof(opcode_t) opcode_t GetOpcode(); byte* GetData(); //returns &(data[0]) bool GetData(byte* outdata, int maxlen); void SetData(byte* pdata, int len); int GetId(); void SetId(int id); bool ParseData(char* instr); bool StringRepr(char* outstr); byte& operator[] (const int index); }; class SendPacket : public PacketModel { protected: byte* returnAddy; public: byte* GetReturnAddy(); void SetReturnAddy(byte* addy); }; class RecvPacket : public PacketModel { protected: byte* callAddy; public: byte* GetCallAddy(); void SetCallAddy(byte* addy); }; //problem: packets may be added to list at any time by any number of threads //solution: critical section associated with each packet list class Synch { public: void Enter(); void Leave(); }; template<class PacketType> class PacketList { private: static const int MAX_STORED_PACKETS = 1000; public: static const int DEFAULT_SHOWN_PACKETS = 100; private: vector<PacketType> list; Synch synch; //wrapper for critical section public: void AddPacket(PacketType* packet); PacketType* GetPacket(int id); int TotalPackets(); }; class SendPacketList : PacketList<SendPacket> { }; class RecvPacketList : PacketList<RecvPacket> { }; class Target //one socket { bool Send(SendPacket* packet); bool Inject(RecvPacket* packet); bool InitSendHook(SendPacketList* sendList); bool InitRecvHook(RecvPacketList* recvList); }; class FilterModel { private: opcode_t opcode; int colorID; bool bFilter; char name[41]; }; class FilterFile { private: FilterModel filter; public: void Save(); void Load(); FilterModel* GetFilter(opcode_t opcode); }; class PacketFilter { private: FilterFile filters; public: bool IsFiltered(opcode_t opcode); bool GetName(opcode_t opcode, char* namestr); //return false if name does not exist COLORREF GetColor(opcode_t opcode); //return default color if no custom color }; class GuiLogic { private: SendPacketList sendList; RecvPacketList recvList; PacketFilter packetFilter; void GetPacketRepr(PacketModel* packet); void ReadNew(); void AddToWindow(); public: void Refresh(); //called from thread void GetPacketInfo(int id); //called from
MainWindow }; I'm looking for a general review of my OO design, use of UML, and use of C++ features. I especially just want to know if I'm doing anything considerably wrong. From what I've read, design review is on-topic for this site (and off-topic for the Code Review site). Any sort of feedback is greatly appreciated. Thanks for reading this.

    Read the article

  • iPack - The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7. Source Code The source code of the iPack tool is in the OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack To build the iPack tool use: rt/tools/ios/Maven/ipack$ mvn package After building, you can run the tool: java -jar <path to ipack.jar> <arguments> Signing keystore The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore users can use keytool from the JDK. One possible scenario is to import an existing private key and the certificate from a key store used on Mac OS: To list the content of an existing key store and identify the source alias: keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password> To create a Java key store and import the private key with its certificate into the key store: keytool -importkeystore \ -destkeystore <dst keystore> -deststorepass <dst keystore password> \ -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \ -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password> Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple one can then import the certificate response back to the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command: keytool -list -v -keystore <ipack keystore> -storepass <keystore password> When looking at the Certificate chain length entry, the number next to it will be 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So to have the whole chain in the signature we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we need first to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format. A certificate chain can be created by concatenating PEM certificates, which should form the chain, into a single file.
For iOS signing we need the following certificates in our chain: Apple Root CA Apple Worldwide Developer Relations CA Our signing leaf certificate To convert a certificate from the binary DER format (.der, .cer) to PEM format: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem To export the signing certificate into PEM format: keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with: keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem To summarize, the following example shows the full certificate chain replacement process: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem >SigningCertChain.pem keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem keytool -list -v -keystore ipack.ks -storepass keystorepwd Usage When the ipack tool is started with no arguments it prints the following usage information: Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ] Signing options: -keystore <keystore> keystore to use for signing -storepass <password> keystore password -alias <alias> alias for the signing certificate chain and the associated private key -keypass <password> password for the private key Application options: -basedir <directory> base directory from which to derive relative paths -appdir <directory> directory with the application executable and resources -appname <file> name of the application executable -appid <id> application identifier Example: ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication
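    A quick way to sanity-check the chain replacement from code, rather than by reading keytool output, is the standard java.security.KeyStore API. This is just an illustrative sketch; the keystore file name, password and alias below reuse the example values from the post:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.Certificate;

    public class ChainCheck {
        public static void main(String[] args) throws Exception {
            KeyStore ks = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("ipack.ks")) {
                ks.load(in, "keystorepwd".toCharArray());
            }
            // Returns the whole chain stored under the alias; after the
            // replacement above it should have length 3 (signing cert,
            // Apple WWDR CA, Apple Root CA) instead of 1.
            Certificate[] chain = ks.getCertificateChain("mycert");
            System.out.println("Certificate chain length: " + (chain == null ? 0 : chain.length));
        }
    }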

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
    I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a post card please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereafter known as GIS. I use it a lot in our BIEE demos where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output? Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. Easy to then have BIP render that image. That's still very doable. You need to install a couple of packages and then load the mapviewer java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you. I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world, lots of ocean and a rather long URL to get it to render. http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind constructing the URL was pretty simple. I'm not going to get into the content of the URL too much, for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template. <?param@begin:CURRENT_SERVER_URL;'myserver'?> Ignore the 'myserver', that was just a dummy value for testing; at runtime it will resolve to: 'http://yourserver:port/xmlpserver' Not quite what we need as mapviewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL.
A little concatenation and substringing later I came up with <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?> That's the basic URL that I can then build on. To get the Hello World map I need to add the following: <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/> Those angle brackets were the source of my headache; BIP's XSLT engine was attempting to process them rather than just pass them through. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter. <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param> I did not need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to dynamically build the URL, so using a set of parameters or variables that I then concatenated would be easier. Now I had the initial server string and the request; all I then did was combine the two using a concat: concat($mURL,$pXML) Embedding that into an image tag: <fo:external-graphic src="url({concat($mURL,$pXML)})"/> and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex, but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory) to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing of the component / subcomponent classification of the JDK JIRA project along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme. The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect. client-libs (Predominantly discussed on 2d-dev, awt-dev and swing-dev.) 2d demo java.awt java.awt:i18n java.beans (See beans-dev.) javax.accessibility javax.imageio javax.sound (See sound-dev.) javax.swing core-libs (See core-libs-dev.) java.io java.io:serialization java.lang java.lang.invoke java.lang:class_loading java.lang:reflect java.math java.net java.nio (Discussed on nio-dev.) java.nio.charsets java.rmi java.sql java.sql:bridge java.text java.util java.util.concurrent java.util.jar java.util.logging java.util.regex java.util:collections java.util:i18n javax.annotation.processing javax.lang.model javax.naming (JNDI) javax.script javax.script:javascript javax.sql org.openjdk.jigsaw (See jigsaw-dev.) security-libs (See security-dev.) java.security javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC) javax.crypto:pkcs11 (JCE: PKCS11 only) javax.net.ssl (JSSE, includes javax.security.cert) javax.security javax.smartcardio javax.xml.crypto org.ietf.jgss org.ietf.jgss:krb5 other-libs corba corba:idl corba:orb corba:rmi-iiop javadb other (When no other subcomponent is more appropriate; use judiciously.) Most of the subcomponents in the xml component are related to jaxp. xml jax-ws jaxb javax.xml.parsers (JAXP) javax.xml.stream (JAXP) javax.xml.transform (JAXP) javax.xml.validation (JAXP) javax.xml.xpath (JAXP) jaxp (JAXP) org.w3c.dom (JAXP) org.xml.sax (JAXP) For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine. hotspot (See hotspot-dev.) build compiler (See hotspot-compiler-dev.) gc (garbage collection, see hotspot-gc-dev.) jfr (Java Flight Recorder) jni (Java Native Interface) jvmti (JVM Tool Interface) mvm (Multi-Tasking Virtual Machine) runtime (See hotspot-runtime-dev.) svc (Serviceability) test core-svc (See serviceability-dev.)
debugger java.lang.instrument java.lang.management javax.management tools The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot as well as retired APIs. vm-legacy jit (Sun Exact VM) jit_symantec (Symantec VM, before Exact VM) jvmdi (JVM Debug Interface) jvmpi (JVM Profiler Interface) runtime (Exact VM Runtime) Notable command line tools in the $JDK/bin directory have corresponding subcomponents. tools appletviewer apt (See compiler-dev.) hprof jar javac (See compiler-dev.) javadoc(tool) (See compiler-dev.) javah (See compiler-dev.) javap (See compiler-dev.) jconsole launcher updaters (Timezone updaters, etc.) visualvm Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not. infrastructure build (See build-dev and build-infra-dev.) licensing (Covers updates to the third party readme, licenses, and similar files.) release_eng (Release engineering) staging (Staging of web pages related to JDK releases.) The specification component encompasses the formal language and virtual machine specifications. specification language (The Java Language Specification) vm (The Java Virtual Machine Specification) The code for the deploy and install areas is not currently included in OpenJDK. deploy deployment_toolkit plugin webstart install auto_update install servicetags In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology. docs doclet guides hotspot release_notes tools tutorial embedded build hotspot libraries globalization locale-data translation performance hotspot libraries The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.
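    As a toy illustration of the prefix-based reclassification described above (the class and mapping here are my own sketch, not the actual migration tooling):

    import java.util.Map;

    public class Reclassifier {
        // Prefix conventions mentioned in the post, promoted to subcomponents.
        private static final Map<String, String> PREFIX_TO_SUBCOMPONENT = Map.of(
                "(coll)", "java.util:collections",
                "(reflect)", "java.lang:reflect",
                "(proxy)", "java.lang:reflect");

        static String subcomponentFor(String description, String fallback) {
            for (Map.Entry<String, String> e : PREFIX_TO_SUBCOMPONENT.entrySet()) {
                if (description.startsWith(e.getKey())) {
                    return e.getValue();
                }
            }
            return fallback;
        }

        public static void main(String[] args) {
            // Prints java.util:collections
            System.out.println(subcomponentFor("(coll) HashMap iterator bug", "java.util"));
        }
    }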

    Read the article

  • How to Customize Fonts and Colors for Gnome Panels in Ubuntu Linux

    - by The Geek
    Earlier this week we showed you how to make the Gnome Panels totally transparent, but you really need some customized fonts and colors to make the effect work better. Here’s how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz. Changing the Gnome Colors the Easy Way You’ll first need to install Gnome Color Chooser, which is available in the default repositories (the package name is gnome-color-chooser). Then go to System > Preferences > Gnome Color Chooser to launch the program. When you see all these tabs you immediately know that Gnome Color Chooser changes not only the font color of the panel, but also the color of the fonts all over Ubuntu, desktop icons, and many other things as well. Now switch to the panel tab; here you can control everything about your panels. You can change the font, font color, background and background color of the panels and start menus. Tick the “Normal” option and choose the color you want for the panel font. If you want, you can change the hover color of the buttons on the panel too. A little below the color options are the font options; these include the font, font size, and the X and Y positioning of the font. The first two options are pretty straightforward: they change the typeface and the size. The X-Padding and Y-Padding may confuse you but they are interesting; they may give a nice look to your panels by increasing the space between items on your panel, like this: X-Padding:   Y-Padding:   The bottom half of the window controls the look of your start menus, which are the Applications, Places, and System menus. You can customize them just the way you did with the panel.   Alright, this was the easy way to change the font of your panels. Changing the Gnome Theme Colors the Command-Line Way The other, harder (not so hard really) way is changing the configuration files that tell your panel how it should look. In your Home Folder, press Ctrl+H to show the hidden files, then find the file ".gtkrc-2.0", open it and insert this line in it. If there are any other lines in the file leave them intact. include "/home/<user_name>/.gnome2/panel-fontrc" Don’t forget to replace the <user_name> with your user account name. When done, save and close the file. Now navigate to the folder ".gnome2" in your Home Folder and create a new file named "panel-fontrc". Open the file you just created with a text editor and paste the following in it: style "my_color" { fg[NORMAL] = "#FF0000" } widget "*PanelWidget*" style "my_color" widget "*PanelApplet*" style "my_color" This text will make the font red. If you want other colors you’ll need to replace the hex value/HTML notation (in this case #FF0000) with the value of the color you want. To get the hex value you can use GIMP or Gcolor2, which is available in the default repositories, or you can right-click on your panel > Properties > Background tab, then click to choose the color you want and copy the hex value. Don’t change anything else in the text. When done, save and close. Now press Alt+F2 and enter "killall gnome-panel" to force it to restart, or you can log out and log in again. Most of you will prefer the first way of changing the font and color for its ease of use and because it gives you many more options, but some may not have the ability or desire to download and install a new program on their machine, or may have reasons of their own for not using it; that’s why we provided both ways.

    Read the article

  • Ubuntu 14.04 triple screen, third screen black and X cursor

    - by Horse
    I am having some issues getting my third screen working properly. I had triple screens working fine on 12.04, using 2 nvidia cards. Did a fresh install of 14.04 and having no end of problems getting it working. It either will just be disabled, or the screen is black with the cursor as an X. I can only enable it from the nvidia server settings tool. The Ubuntu native display settings won't even show the 3rd screen. I tried copying the xorg.conf from my old install, which upon restarting X worked fine on the login screen, but then it just sat there after I logged in and didn’t do anything (mouse was still working). I am using gnome-session-fallback instead of unity if that makes any difference. Still having these issues if I try unity though. How do I get my 3rd screen working and displaying a desktop? Here is my current xorg.conf # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 331.20 (buildd@roseapple) Mon Feb 3 15:07:22 UTC 2014 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: unknown, VertRefresh source: unknown Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 0.0 - 0.0 VertRefresh 0.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 520" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 520" BusID "PCI:3:0:0" EndSection Section "Screen" # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0" # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0" # Removed Option "SLI" "Off" # Removed Option "BaseMosaic" "off" # Removed Option "metamodes" "GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-2: nvidia-auto-select +0+0, GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-3: nvidia-auto-select +1280+0, GPU-82e96214-175e-5e6a-218c-5bdbc948daf2.DVI-I-1: nvidia-auto-select +3200+0" # Removed Option "SLI" "off" # Removed Option "BaseMosaic" "on" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-0" Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" # Removed Option "metamodes" "nvidia-auto-select +0+0" # Removed Option "metamodes" "DVI-I-3: 
nvidia-auto-select +0+0" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Here is my old 'working in 12.04' xorg.conf # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 310.19 ([email protected]) Thu Nov 8 02:08:55 PST 2012 Section "ServerLayout" # Removed Option "Xinerama" "0" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen2" Screen 2 "Screen2" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "1" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: unknown, VertRefresh source: unknown Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "Apple Cinema HD" HorizSync 74.0 - 74.6 VertRefresh 59.9 - 60.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:1:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 520" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:1:0:0" Screen 1 EndSection Section "Screen" # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-2: nvidia-auto-select +1280+0" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0 {viewportout=1280x720+0+152}" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-2: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Extensions" Option "Composite" "Disable" EndSection

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head to head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write-ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer. The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive. Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site: "Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play." Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage. Linux still runs the server and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients'. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SSD. When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server which then supplied PXELINUX using TFTP. When booted, file access was accomplished through network block device (NBD). This is a very easy to use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver and the device can be mounted within the file system using the mount command. One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system as the server only sees a single file. The advantage is performance as the client operating system simply sees a block device, and besides, these security issues aren't relevant in this setup. Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately, gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot.
It can also optionally boot from an HTTP server rather than the more traditional TFTP server. According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios. At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc. Summary Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences from most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.

    Read the article

  • Five development tools I can't live without

    - by bconlon
    When applying to join Geeks with Blogs I had to specify the development tools I use every day. That got me thinking: it's taken a long time to whittle my tools of choice down to the selection I use, so it might be worth sharing. Before I begin, I appreciate we all have our preferred development tools, but these are the ones that work for me. Microsoft Visual Studio Microsoft Visual Studio has been my development tool of choice for more years than I care to remember. I first used this when it was Visual C++ 1.5 (hats off to those who started on 1.0) and by 2.2 it had everything I needed from a C++ IDE. Versions 4 and 5 followed, and if I had to guess I would expect more Windows applications were written in VC++ 6 and VB6 than in any other language. Then came the not-so-great versions Visual Studio .Net 2002 (7.0) and 2003 (7.1). If I'm honest I was still using v6. 2005 was better and 2008 was simply brilliant. Everything worked, the compiler was super fast and I was happy again...then came 2010...oh dear. 2010 is a big step backwards for me. It's not encouraging for my upcoming WPF exploits that 2010 is fronted in WPF technology, with the forever-growing Find/Replace dialog, the issues with C++ intellisense, and the buggy debugger. That said, it is still my tool of choice, but I hope they sort the issues out in SP1. I've tried other IDEs like Visual Age and Eclipse, but for me Visual Studio is the best. A really great tool. Liquid XML Studio XML development is a tricky business. The W3C standards are often difficult to get to the bottom of, so it's great to have a graphical tool to help. I first used Liquid Technologies' tools 5 or 6 years back when I needed to process XML data in C++. Their excellent XML Data Binding tool has an easy-to-use Wizard UI (as compared to Castor or JAXB command line tools) and allows you to generate code from an XML Schema. So instead of having to deal with untyped nodes as with a DOM parser, you get an Object Model providing a custom API in C++, C#, VB etc. More recently they developed a graphical XML IDE with XML Editor, XSLT, XQuery debugger and other XML tools. So now I can develop an XML Schema graphically, click a button to generate a Sample XML document, and click another button to run the Wizard to generate code including a Sample Application that will then load my Sample XML document into the generated object model. This is a very cool toolset. Note: XML Data Binding has nothing to do with WPF Data Binding, but I hope to cover both in more detail another time. .Net Reflector Note: I've just noticed that starting from the end of February 2011 this will no longer be a free tool! .Net Reflector turns .Net byte code back into C# source code. But how can it work this magic? Well, the clue is in the name: it uses reflection to inspect a compiled .Net assembly. The assembly is compiled to byte code; it doesn't get compiled to native machine code until it's needed, using a just-in-time (JIT) compiler. The byte code still has all of the information needed to see classes, variables, methods and properties, so Reflector gathers this information and puts it in a handy tree. I have used .Net Reflector for years in order to understand what the .Net Framework is doing, as it sometimes has undocumented, quirky features. This really has been invaluable in certain instances and I cannot heap enough kudos on the original developer, Lutz Roeder.
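    Reflection-based inspection is easy to demonstrate in Java too, where compiled bytecode keeps the same kind of metadata that .Net Reflector walks over; a minimal, illustrative sketch (not related to Reflector itself):

    import java.lang.reflect.Method;

    public class ReflectorDemo {
        public static void main(String[] args) throws Exception {
            // Compiled bytecode retains class, method and field metadata,
            // so full signatures can be reconstructed without any source code.
            Class<?> c = Class.forName("java.lang.String");
            for (Method m : c.getDeclaredMethods()) {
                System.out.println(m);
            }
        }
    }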
Smart Assembly In order to stop nosy geeks looking at our code using a tool like .Net Reflector, we need to obfuscate (mess up) the byte code. Smart Assembly is a tool that does this. Again I have used this for a long time. It is very quick and easy to use. Another excellent tool. Coincidentally, .Net Reflector and Smart Assembly are now both owned by Red Gate. Again kudos goes to the original developer, Jean-Sebastien Lange. TortoiseSVN SVN (Apache Subversion) is a Source Control System developed as an open source project. TortoiseSVN is a graphical UI wrapper over SVN that hooks into Windows Explorer to enable files to be Updated, Committed, Merged etc. from the right-click menu. This is an essential tool for keeping my hard work safe! Many years ago I used Microsoft Source Safe and I disliked CVS-type systems. But TortoiseSVN is simply the best source control tool I have ever used. --- So there you have it, my top 5 development tools that I use (nearly) every day and that have helped to make my working life a little easier. I'm sure there are other great tools that I wish I used but have never heard of, but if you have not used any of the above, I would suggest you check them out as they are all very, very cool products.

    Read the article

  • Cookbook: SES and UCM setup

    - by George Maggessy
    The purpose of this post is to guide you through setting up the integration between UCM and SES. In my next post I’ll show different approaches to integrate WebCenter Portal, UCM and SES based on some common scenarios. Let’s get started. WebCenter Content Configuration WebCenter Content has a component that adds functionality to the content server to allow it to be searched via Oracle SES. To enable the component installation, go to Administration -> Admin Server and select SESCrawlerExport. Click the update button and restart the UCM_server1 managed server. Once the managed server is back, we’ll configure the component. In the menu, under Administration you should see SESCrawlerExport. Click on the link. You’ll see the window below. Click on Configure SESCrawlerExport. Configure the values below: Hostname: SES hostname. Feed Location: Directory where data feeds will be saved. Metadata List: List of metadata that will be searchable by SES. After updating the values click on the Update button. Come back to the SESCrawlerExport Administration UI and click on the Take Snapshot button. It will create the data feeds in the specified Feed Location. To check that the correct configuration was done, please access the following URL: http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. It should download a config file in the format below: <?xml version="1.0" encoding="UTF-8"?> <rsscrawler xmlns="http://xmlns.oracle.com/search/rsscrawlerconfig"> <feedLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONTROL&source=default]]></feedLocation> <errorFileLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_STATUS&IsJava=1&source=default&StatusFeed=]]></errorFileLocation> <feedType>controlFeed</feedType> <sourceName>default</sourceName> <securityType>attributeBased</securityType> <securityAttribute name="Account" grant="true"/> <securityAttribute name="DocSecurityGroup" grant="true"/> <securityAttribute name="Collab" grant="true"/> </rsscrawler> Make sure the Account and DocSecurityGroup values are true. SES Configuration Let’s start by configuring the Identity Plug-ins in SES. Go to Global Settings -> System -> Identity Management Setup. Select Oracle Content Server and click the Activate button. We’ll populate the following values: HTTP endpoint for authentication: URL to WebCenter Content. Notice that /cs/idcplg was added at the end of the URL. Admin User: UCM Admin user. This user must have access to all CPOE content. Password: Password for the Admin user. Authentication Type: NATIVE. Go back to the Home tab and click on Sources on the top left. Select Oracle Content Server on the right and click the Create button. Configuration URL: URL that points to the configuration file. Example: http://<ucm_hostname>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. User ID: UCM Admin user. Password: Password for the Admin user. Click on the Authorization tab and add the appropriate values to the fields below. Make sure you see the ACCOUNT and DOCSECURITYGROUP security attributes at the end of the page. HTTP endpoint for authorization: http://<ucm_hostname>:<port>/cs/idcplg. Display URL prefix: http://<ucm_hostname>:<port>/cs. Administrator user: UCM Admin user. Administrator password.
On the Document Types tab, add the documents that should be indexed by SES. As our last step, we’ll configure the Federation Trusted Entities under Global Settings. Entity Name: The user must be present in both the identity management server configured for your WebCenter application and the identity management server configured for Oracle SES. For instance, I used weblogic in my sample. Password: Entity user password. Now you are ready to test the integration on the SES UI: http://<ses hostname>:<port>/search/query/.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #004

    - by pinaldave
    Here is the list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 Auto Generate Script to Delete Deprecated Fields in Current Database In my early career, every time I had to drop a column I had a hard time doing it, because I was scared that the column might be needed somewhere in the code. Due to this fear I never dropped any column; I just renamed the column. If the column which I renamed was needed afterwards it was very easy to rename it back again. However, it is not recommended to keep the deleted column renamed in the database. At regular intervals I used to drop the columns which were prefixed with a specific word. This script is 6 years old but still works. Give it a look; I am open to improvements. 2007 Shrinking Truncate Log File – Log Full – Part 2 Shrinking a database or mdf file is indeed a bad thing and it creates lots of problems. However, once in a while there is a legitimate requirement to shrink the log file – a very rare one. On those rare occasions, shrinking or truncating the log file may be the only solution. However, one should make sure to take a backup before and after the truncate or shrink, as in case of a disaster they can be very useful. Remember that truncating the log file will break the log chain and can create major issues during a restore. Anyway, use this feature with caution. 2008 Simple Use of Cursor to Print All Stored Procedures of Database Including Schema This is a very interesting requirement I used to face in my early career days: I needed to print all the stored procedures of my database. Interestingly enough, I had written a cursor to do so. Today when I look back at this stored procedure, I believe there would be a much cleaner way to do the same task; however, I still use this SP quite often when I have to document all the stored procedures of my database. Interesting Observation about Order of Resultset without ORDER BY In industry, many developers avoid using the ORDER BY clause to display results in a particular order, thinking that the index enforces the order. In this interesting example, I demonstrate that without using ORDER BY, the same table and a similar query can return different results. The query optimizer always returns results using whichever method is optimized for performance. The lesson: there is no order unless ORDER BY is used. 2009 Size of Index Table – A Puzzle to Find Index Size for Each Index on Table I asked this puzzle earlier, where I asked how to find the index size for each of the tables. The puzzle was very well received and lots of interesting answers came in. To answer this question I have written the following blog posts. I suggest this weekend you try to solve this problem and see if you can come up with a better solution. If not, well here are the solutions. Solution 1 | Solution 2 | Solution 3 Understanding Table Hints with Examples Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. The SQL Server query optimizer is a very smart tool and usually makes a better selection of execution plan than a manually hinted one.
Suggesting hints to the Query Optimizer should be attempted only when absolutely necessary, and only by experienced developers who know exactly what they are doing (or in development as a way to experiment and learn). Interesting Observation – TOP 100 PERCENT and ORDER BY I have seen developers and DBAs using TOP very casually when they have to use the ORDER BY clause. Theoretically, there is no need for ORDER BY in the view at all. All the ordering should be done outside the view and the view should just have the SELECT statement in it. It was quite common to save this extra typing by including the ordering inside the view. In several instances developers want a complete resultset, and for that they include TOP 100 PERCENT along with ORDER BY, assuming that this will simulate a SELECT statement with ORDER BY. 2010 SQLPASS Nov 8-11, 2010 - Seattle – An Alternative Look at Experience In 2010 I attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event - Best Summit Ever. Instead of writing about my usual routine or the event, I wrote about the interesting things I did and how I felt about it! When I go back and read it, I feel that this is the best event I attended in 2010. Change Database Access to Single User Mode Using SSMS The image says it all. 2011 SQL Server 2012 has introduced new analytic functions. These functions were long awaited and I am glad that they are now here. Before, when any of these functions was needed, people used to write long T-SQL code to simulate them. But now there’s no need to do so. Having native functions available also helps performance as well as readability. The new functions, each covered on SQLAuthority and documented on MSDN, are: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC and PERCENT_RANK. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
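    As a quick, hedged illustration of what these functions buy you (the table name, connection string and data below are made up, and it assumes the Microsoft JDBC driver is on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LagDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string; adjust server, database and credentials.
            String url = "jdbc:sqlserver://localhost;databaseName=SalesDB;user=sa;password=secret";
            // LAG fetches the previous row's value without the self-join people
            // used to write to simulate it before SQL Server 2012.
            String sql = "SELECT OrderYear, Revenue, "
                       + "LAG(Revenue) OVER (ORDER BY OrderYear) AS PrevRevenue "
                       + "FROM YearlySales"; // hypothetical table
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s: %s (previous year: %s)%n",
                            rs.getString(1), rs.getString(2), rs.getString(3));
                }
            }
        }
    }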

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx I received much interesting feedback on my previous blog post and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but ways better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST but that didn't change the results. When I was using the I2C-controlled pin I tried the same and the timings got ways worse (increasing more than 10 times), and so I did not comment on that part, wanting to investigate the issue a bit more in detail. It seems that increasing the priority of the application thread negatively impacts the I2C communication. I also tried the Linux-based implementation (using a different Galileo board since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch, modified to use pin 2 and blink the LED for 1ms, are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1ms (320% compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Neither system was under load during the test; loading some applications that use part of the CPU time would probably make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development. The Galileo running the Linux-based stack or running Windows for IoT is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited, and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath seems not to provide any help doing that. Why should I have to use sockets, with no access to all the high level connectivity features we have on "full" Windows? I know that it's possible to use some third party libraries, try to build them using the Windows For IoT SDK etc., but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people that really want to design devices that leverage internet connectivity and the cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc.
I know that those components may require additional storage and memory etc. So making the OS componentizable (or, at least, provide a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the Arduino and Raspberry Pi* success may have attracted the attention of marketing departments worldwide and almost any new development board those days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offer to provide, at the end, a better product or service to their customers. Marketing is important, but can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers integrating it with hardware and application software. I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other side being able to run a sketch designed for an 8 bit microcontroller on a full-featured application processor may sound cool and an easy upgrade path for people that just experimented with sensors etc. on Arduino but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.   *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi , try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that laid out a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative.

    DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    You should have an OEM 12c environment for DBaaS administration, and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center?
    This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of the services are much faster provisioning and faster response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job?
    The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service.
    Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a database in cloud infrastructure?
    DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database, with storage/OS/DB all brought together to 'transparently' provide a service to applications? Will there be cross-database access by applications/users?
    Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the differing needs and patching downtime of the apps the databases might be serving?
    You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant DB in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds (a minimal sketch of this unplug/plug flow follows at the end of this post).

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
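    As an editorial aside, the unplug/plug flow mentioned in the last answer is easy to sketch. The following is my own illustration of the Oracle 12c SQL involved, driven from JDBC; the host names, ports, credentials, PDB name, and manifest path are placeholders, not anything shown in the forum.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class PdbRelocation {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection to the original CDB root as a privileged common user
                try (Connection cdb = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/cdb1", "c##admin", "secret");
                     Statement stmt = cdb.createStatement()) {
                    // Close the PDB, then unplug it, writing its manifest to an XML file
                    stmt.execute("ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE");
                    stmt.execute("ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/manifests/pdb1.xml'");
                }
                // On the newer / patched CDB, plug the same PDB back in and open it
                try (Connection newCdb = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1522/cdb2", "c##admin", "secret");
                     Statement stmt = newCdb.createStatement()) {
                    stmt.execute("CREATE PLUGGABLE DATABASE pdb1 USING '/u01/manifests/pdb1.xml' NOCOPY");
                    stmt.execute("ALTER PLUGGABLE DATABASE pdb1 OPEN");
                }
            }
        }

    The same statements can of course be issued from SQL*Plus; JDBC is used here only to keep the sketch self-contained.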

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or were those the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or whether the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (merged with the very latest source base) will build on many if not all 8 platforms, with a full 'from scratch' build, not an incremental build, which can hide full-build problems. The testing needed varies, depending on what has been changed.

    Anyone who has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be for everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, so 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although consistency of the build machines hasn't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.

    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/test systems, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think about it, I may be one from time to time. :^{

    But the point is that by spreading the build over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback (a conceptual sketch of this fan-out-and-gate idea follows at the end of this post). We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' step followed by extensive testing (the testing not done by JPRT, or on the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^) -kto
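    Editor's sketch of the fan-out-and-gate idea described above. This is not JPRT code, just a minimal conceptual illustration in Java: one build job per platform runs in parallel, and the putback is rejected if any platform fails. The platform names and the buildFromScratch stand-in are invented for the example.

        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class PutbackGate {
            // Hypothetical platform list; the real system covers ~8 platforms
            static final List<String> PLATFORMS = List.of(
                    "solaris-sparc", "solaris-i586", "linux-i586", "windows-i586");

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(PLATFORMS.size());
                List<Future<Boolean>> results = pool.invokeAll(
                        PLATFORMS.stream()
                                 .map(p -> (Callable<Boolean>) () -> buildFromScratch(p))
                                 .toList());
                pool.shutdown();

                boolean allGood = true;
                for (Future<Boolean> r : results) {
                    allGood &= r.get(); // one failed platform blocks the whole putback
                }
                System.out.println(allGood ? "Putback accepted" : "Putback rejected");
            }

            // Stand-in for dispatching a full 'from scratch' build to a dedicated build machine
            static boolean buildFromScratch(String platform) {
                System.out.println("Building on " + platform + "...");
                return true; // pretend the build and tests passed
            }
        }

    The point of the sketch is only the shape of the gate: fan the work out, wait for every result, and let a single failure veto the commit.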

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully, I'm talking about "feelings" and not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an "MS fanboy"; I don't care, I hope that people can appreciate my opinion even if it doesn't match theirs.

    The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are there to confuse people... so I can understand this "recycling" of an existing name.

    So Microsoft is entering the hardware arena... for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new models) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?), or just lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop in the huge mass of anonymous PCs on display... only Macs shine out there...). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition in a market that was ruled only by prices (the lower the better, even when that means low quality) and no innovative features at all (when was the last time that a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is playing a crucial move for its future, trying to get back into the innovation race against Apple and Google. MS can't afford to fail this time.

    I saw the new devices (the WinRT and Pro) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover... Using "HD" and "full HD" to define display resolution instead of using the real figures, and reviving the "ClearType" brand (now dead on Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "retina" displays that brought very high definition screens to tablets. Also, there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and i5 for the x86 one) or expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).

    Also nothing about the price, and this will be another critical point, because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and don't like the Apple model where flash memory (which is dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope.

    To be honest, I really don't like the marketplace model, the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) and the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent in the past converting .NET IL into real machine code look like a huge waste of time... as soon as a new processor architecture is supported by Windows you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven't changed this situation.

    The new Metro UI got mixed reviews. On my side I should say that it is very pleasant to use on a touch screen. I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves... Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take the whole screen is the norm. For devices like kiosks, vending machines etc., this kind of UI can be a great selling point.

    I don't need a new tablet (to be honest I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!

    Read the article

  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei, with a fully sold out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers, including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data.

    It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints. I also briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are available as a PDF download.

    In the afternoon I did my JavaScript + Java EE 7 talk, titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning Java EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed. The slide deck for the talk is "JavaScript/HTML5 Rich Clients Using Java EE 7" from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    I finished off JavaDay Taipei with my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE" (this was the last session of the conference). The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts: a bird's-eye view of the NoSQL landscape; how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc.; and how to use NoSQL native APIs in Java EE via CDI (a minimal sketch of the JPA-facade approach follows at the end of this post). The slides for the talk are "Using NoSQL with ~JPA, EclipseLink and Java EE" from Reza Rahman. The JPA-based demo is available here, while the CDI-based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running.

    After the event, the Oracle University folks hosted a reception in the evening, which was very well attended by organizers, speakers and local Java community leaders.

    I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community, including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite best efforts by us, the excellent Taiwanese Emergency Rescue Team and world-class Taiwanese physicians, we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.

    Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and a Java Evangelist at Sun at one point. Ironically, the only picture I was able to take of the trail was mere moments before tragedy. I thought I should place him in that picture in profoundly respectful memoriam. Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold, let alone attempt to ascend, in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind, it seems appropriate to me to quote some lyrics from the song "Runes to My Memory" by legendary Swedish heavy metal band Amon Amarth, idealizing a fallen Viking warrior cut down in his prime:

    "Here I lie on wet sand
    I will not make it home
    I clench my sword in my hand
    Say farewell to those I love
    When I am dead
    Lay me in a mound
    Place my weapons by my side
    For the journey to Hall up high
    When I am dead
    Lay me in a mound
    Raise a stone for all to see
    Runes carved to my memory"

    I submit my deepest condolences to Robbie's family and hope my next trip to Taiwan ends on a less somber note.
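    For readers curious about the JPA-centric facade mentioned above, here is a minimal sketch of mapping an entity to MongoDB with EclipseLink NoSQL. This is an editorial illustration rather than the session's demo code; the Order class and its fields are invented for the example, and the persistence unit would additionally need eclipselink.target-database pointed at the Mongo platform plus the Mongo connection properties in persistence.xml.

        import org.eclipse.persistence.nosql.annotations.DataFormatType;
        import org.eclipse.persistence.nosql.annotations.Field;
        import org.eclipse.persistence.nosql.annotations.NoSql;

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;

        // Maps this entity to a MongoDB collection instead of a relational table
        @Entity
        @NoSql(dataFormat = DataFormatType.MAPPED)
        public class Order {

            @Id
            @GeneratedValue
            @Field(name = "_id")   // use Mongo's native document id
            private String id;

            @Field(name = "description")
            private String description;

            // getters/setters omitted for brevity
        }

    The appeal of this approach is exactly what the talk title hints at: the persistence code stays ~JPA, so swapping the underlying store is largely a mapping and configuration exercise.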

    Read the article

  • Null reading in stream images? Unable to start activity ComponentInfo

    - by lasmith
    I have reviewed a lot of similar questions regarding not being able to launch an activity, but they don't seem to quite match my problem. I am working on a simple blackjack game, but it's force quitting. I suspect there is a problem with loading up the card PNG images I have. Stepping through the debugger, it crashes while in the resetGame() function. I'm sure I am doing something dumb. My logcat:

        10-15 20:21:43.309: E/AndroidRuntime(2863): FATAL EXCEPTION: main
        10-15 20:21:43.309: E/AndroidRuntime(2863): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.smith.blackjack/com.smith.blackjack.Main}: java.lang.NullPointerException
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2059)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2084)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.access$600(ActivityThread.java:130)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1195)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Handler.dispatchMessage(Handler.java:99)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Looper.loop(Looper.java:137)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.main(ActivityThread.java:4745)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invokeNative(Native Method)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invoke(Method.java:511)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at dalvik.system.NativeStart.main(Native Method)
        10-15 20:21:43.309: E/AndroidRuntime(2863): Caused by: java.lang.NullPointerException
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.DeckOfCards.<init>(DeckOfCards.java:17)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.resetGame(Main.java:98)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.onCreate(Main.java:67)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Activity.performCreate(Activity.java:5008)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2023)
        10-15 20:21:43.309: E/AndroidRuntime(2863): ... 11 more

    My AndroidManifest.xml:

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.smith.blackjack"
            android:versionCode="1"
            android:versionName="1.0" >
            <uses-sdk android:minSdkVersion="11" android:targetSdkVersion="15" />
            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name"
                android:theme="@style/AppTheme" >
                <activity
                    android:name=".Main"
                    android:label="@string/title_activity_main" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
        </manifest>

    Here is my Main.java:

        package com.smith.blackjack;

        import android.os.Bundle;
        import android.app.Activity;
        import android.content.res.AssetManager;
        import android.graphics.drawable.Drawable;

        import java.io.IOException;
        import java.io.InputStream;

        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.ImageView;

        public class Main extends Activity {

            private ImageView dealerCard0;
            private ImageView dealerCard1;
            private ImageView dealerCard2;
            private ImageView dealerCard3;
            private ImageView playerCard0;
            private ImageView playerCard1;
            private ImageView playerCard2;
            private ImageView playerCard3;
            private ImageView imgResult;
            private Button btnDeal;
            private Button btnDraw;
            private Button btnHold;
            private DeckOfCards deckOfCards;
            private int[] dealerValues;
            private int dealerSum;
            private int dealerCardNumber;
            private int[] playerValues;
            private int playerSum;
            private int playerCardNumber;
            private InputStream dealerHiddenCard;
            private Card dealerCard;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                dealerCard0 = (ImageView) findViewById(R.id.dealerCard0);
                dealerCard1 = (ImageView) findViewById(R.id.dealerCard1);
                dealerCard2 = (ImageView) findViewById(R.id.dealerCard2);
                dealerCard3 = (ImageView) findViewById(R.id.dealerCard3);
                playerCard0 = (ImageView) findViewById(R.id.playerCard0);
                playerCard1 = (ImageView) findViewById(R.id.playerCard1);
                playerCard2 = (ImageView) findViewById(R.id.playerCard2);
                playerCard3 = (ImageView) findViewById(R.id.playerCard3);
                imgResult = (ImageView) findViewById(R.id.imgResult);
                btnDeal = (Button) findViewById(R.id.deal);
                btnDraw = (Button) findViewById(R.id.draw);
                btnHold = (Button) findViewById(R.id.hold);
                btnDeal.setOnClickListener(btnDealListener);
                btnDraw.setOnClickListener(btnDrawListener);
                btnHold.setOnClickListener(btnHoldListener);
                resetGame();
            }

            private void resetGame() {
                AssetManager assets = getAssets();
                dealerValues = new int[4];
                playerValues = new int[4];
                dealerSum = 0;
                playerSum = 0;
                dealerCardNumber = 0;
                playerCardNumber = 0;
                for (int i = 0; i < 4; i++) {
                    dealerValues[i] = 0;
                    playerValues[i] = 0;
                }
                try {
                    InputStream stream = assets.open("cardback.png");
                    // stream = assets.open("cardback.png");
                    Drawable cardImage = Drawable.createFromStream(stream, null);
                    dealerCard0.setImageDrawable(cardImage);
                    dealerCard1.setImageDrawable(cardImage);
                    dealerCard2.setImageDrawable(cardImage);
                    dealerCard3.setImageDrawable(cardImage);
                    playerCard0.setImageDrawable(cardImage);
                    playerCard1.setImageDrawable(cardImage);
                    playerCard2.setImageDrawable(cardImage);
                    playerCard3.setImageDrawable(cardImage);
                    imgResult.setImageDrawable(cardImage);
                    deckOfCards = new DeckOfCards();
                    deckOfCards.shuffle();
                    assets.close();
                } catch (IOException e) {
                    Log.e("Reset Game", "Error Loading", e);
                }
            }

            public OnClickListener btnDealListener = new OnClickListener() {
                // @Override
                public void onClick(View v) {
                    try {
                        AssetManager assets = getAssets();
                        InputStream stream;
                        // first player card
                        Card newCard;
                        newCard = deckOfCards.dealCard();
                        playerValues[playerCardNumber] = newCard.faceValue;
                        playerCardNumber++;
                        stream = assets.open(newCard.File);
                        Drawable cardImage = Drawable.createFromStream(stream, newCard.File);
                        playerCard0.setImageDrawable(cardImage);
                        assets.close();
                        // second player card
                        newCard = deckOfCards.dealCard();
                        playerValues[playerCardNumber] = newCard.faceValue;
                        playerCardNumber++;
                        stream = assets.open(newCard.File);
                        cardImage = Drawable.createFromStream(stream, newCard.File);
                        playerCard1.setImageDrawable(cardImage);
                        assets.close();
                        // first dealer card hidden
                        newCard = deckOfCards.dealCard();
                        dealerCard = newCard;
                        dealerValues[dealerCardNumber] = newCard.faceValue;
                        dealerCardNumber++;
                        dealerHiddenCard = assets.open(newCard.File);
                        stream = assets.open("cardback.png");
                        cardImage = Drawable.createFromStream(stream, "cardback");
                        dealerCard0.setImageDrawable(cardImage);
                        assets.close();
                        // second dealer card open
                        newCard = deckOfCards.dealCard();
                        dealerValues[dealerCardNumber] = newCard.faceValue;
                        dealerCardNumber++;
                        stream = assets.open(newCard.File);
                        cardImage = Drawable.createFromStream(stream, newCard.File);
                        dealerCard1.setImageDrawable(cardImage);
                        assets.close();
                    } catch (IOException e) {
                        Log.e("Deal", "Error Loading", e);
                    }
                };
            };

            public OnClickListener btnDrawListener = new OnClickListener() {
                // @Override
                public void onClick(View v) {
                    try {
                        AssetManager assets = getAssets();
                        InputStream stream;
                        // get next player card
                        Card newCard;
                        newCard = deckOfCards.dealCard();
                        playerValues[playerCardNumber] = newCard.faceValue;
                        playerCardNumber++;
                        stream = assets.open(newCard.File);
                        Drawable cardImage = Drawable.createFromStream(stream, newCard.File);
                        switch (playerCardNumber) {
                            case 3: playerCard2.setImageDrawable(cardImage);
                            case 4: playerCard3.setImageDrawable(cardImage);
                        }
                        assets.close();
                    } catch (IOException e) {
                        Log.e("Draw", "Error Loading", e);
                    }
                };
            };

            public OnClickListener btnHoldListener = new OnClickListener() {
                // @Override
                public void onClick(View v) {
                    Drawable cardImage;
                    // evaluate player hand
                    playerSum = evaluate(playerValues);
                    if (playerSum > 21) {
                        // player losses
                    }
                    // flip over the dealer hidden card
                    cardImage = Drawable.createFromStream(dealerHiddenCard, dealerCard.File);
                    Card newCard;
                    InputStream stream;
                    AssetManager assets = getAssets();
                    for (int i = 2; i < 4; i++) {
                        dealerSum = evaluate(dealerValues);
                        if (dealerSum < 16) {
                            newCard = deckOfCards.dealCard();
                            dealerValues[dealerCardNumber] = newCard.faceValue;
                            dealerCardNumber++;
                            try {
                                stream = assets.open(newCard.File);
                                cardImage = Drawable.createFromStream(stream, newCard.File);
                                switch (dealerCardNumber) {
                                    case 3: dealerCard2.setImageDrawable(cardImage);
                                    case 4: dealerCard3.setImageDrawable(cardImage);
                                }
                                assets.close();
                            } catch (IOException e) {
                                Log.e("Draw", "Error Loading", e);
                            }
                            if (dealerSum < playerSum) {
                                // player wins
                            }
                            if (dealerSum > playerSum) {
                                // dealer wins
                            }
                            if (dealerSum == playerSum) {
                                // it is a draw
                            }
                        }
                    }
                };
            };

            public int evaluate(int[] values) {
                int sumCards = 0;
                for (int i = 0; i < 4; i++) {
                    sumCards += values[i];
                }
                if (sumCards > 21) {
                    for (int i = 0; i < 4; i++) {
                        if (values[i] == 11) {
                            values[i] = 1;
                            sumCards -= 10;
                            continue;
                        }
                    }
                }
                return sumCards;
            }
        }

    My DeckOfCards class:

        package com.smith.blackjack;

        import java.util.Random;

        public class DeckOfCards {

            private Card[] deck;
            private int currentCard;
            private static final int NUMBER_OF_CARDS = 52;
            private static final Random randomNumbers = new Random();

            public DeckOfCards() {
                deck = new Card[NUMBER_OF_CARDS];
                currentCard = 0;
                for (int count = 0; count < deck.length; count++) {
                    deck[count].faceValue = count + 1;
                }
            }

            public void shuffle() {
                currentCard = 0;
                for (int first = 0; first < deck.length; first++) {
                    int second = randomNumbers.nextInt(NUMBER_OF_CARDS);
                    int temp = deck[first].faceValue;
                    deck[first].faceValue = deck[second].faceValue;
                    deck[second].faceValue = temp;
                }
            }

            public Card dealCard() {
                Card temp = new Card();
                temp.faceValue = 0;
                temp.File = "";
                if (currentCard < deck.length) {
                    temp.faceValue = deck[currentCard].faceValue / 4;
                    int suit = deck[currentCard].faceValue % 4;
                    String suitString = "";
                    switch (suit) {
                        case 0: suitString = "c";
                        case 1: suitString = "d";
                        case 2: suitString = "h";
                        case 3: suitString = "s";
                    }
                    Integer face = temp.faceValue / 4;
                    String faceString = face.toString();
                    temp.File = faceString + suitString + ".png";
                    switch (temp.faceValue) {
                        case 11: temp.faceValue = 10;
                        case 12: temp.faceValue = 10;
                        case 13: temp.faceValue = 10;
                    }
                    return temp;
                } else
                    return temp;
            }
        }
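    Editorial note on the question above: the crash reported in the logcat (NullPointerException at DeckOfCards.java:17) lines up with the DeckOfCards constructor. `new Card[NUMBER_OF_CARDS]` allocates an array of null references, so `deck[count].faceValue` dereferences null on the first loop iteration. A minimal sketch of the likely fix, instantiating each Card before touching its fields:

        public DeckOfCards() {
            deck = new Card[NUMBER_OF_CARDS];
            currentCard = 0;
            for (int count = 0; count < deck.length; count++) {
                deck[count] = new Card();          // allocate the Card before assigning to it
                deck[count].faceValue = count + 1; // previously threw NullPointerException here
            }
        }

    Separately, the switch statements in dealCard() fall through every case because they have no break statements, so every suit ends up "s" and every face value ends up 10; that looks unintended, though it is not what causes this particular crash.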

    Read the article

  • Synergy - easy share of keyboard and mouse between multiple computers

    Did you ever have the urge to share one set of keyboard and mouse between multiple machines? If so, please read on...

    Using multiple machines

    Honestly, as a software craftsman it is my daily business to run multiple machines - either physical or virtual - to be able to solve my customers' requirements. Recent hardware makes this very easy. For laptops it's a no-brainer to attach a second or even a third screen in order to extend your native display. This works quite handily, and in my case I used to attach two additional screens - one via the HD15 connector, the other via HDMI. But... as it's a laptop, and therefore a mobile unit, there are slight restrictions. Detaching and re-attaching all cables when changing locations is one of them, and hardware limitations are another. After all, it's a laptop and not a workstation.

    I guess that anyone working in IT (or ICT) has more than one machine at their workplace or their home office, and at least I find it quite annoying to have multiple sets of keyboard and mouse conquering the remaining space on my desk. Despite the ugly looks of all those cables and the general 'chaos of distraction', I prefer a cleaner solution and working environment. This allows me to actually focus on my work and the tasks at hand rather than worry about choosing the right combination of keyboard and mouse. My current workplace is a patchwork of various pieces of hardware (approx. 2-3 years old):

    - DIY desktop on Ubuntu 12.04 64-bit, Core2 Duo (E7400, 2.8GHz), 4GB RAM, 2x 250GB HDD, nVidia GPU 512MB
    - Dell Inspiron 1525 on Windows 8 64-bit, 4GB RAM, 200GB HDD
    - HP Compaq 6720s on Windows Vista 32-bit, Core2 Duo (T5670, 1.8GHz), 2GB RAM, 160GB HDD
    - Mac mini on Mac OS X 10.7, Core i5 (2.3 GHz), 2GB RAM, 500GB HDD

    I know... not the latest and greatest, but a decent combination to work with. New systems are already on the shopping list, but I live in the 'wrong' country to buy computer hardware. So the next trip abroad will provide me with some new stuff.

    Using multiple operating systems

    The list of hardware above already names different operating systems, and actually I have only one preference: Linux. But still, my job as a software craftsman doing Visual FoxPro and .NET development requires other OSes, too. Not a big deal, it's just like this. In addition to those physical machines, there are a bunch of virtual machines around, most of them running either Windows XP or Windows 7. For years I have had the practice of isolating each customer's development work in its own virtual machine and environment. This keeps things clean and version-safe.

    But as you can easily imagine, with that setup there are a couple of constraints concerning keyboard and mouse. Usually, those systems require their own pieces of hardware attached. As stated, I don't like clutter on my desk's surface, so a cross-platform solution has to come in here. In the past I tried various applications, hardware and network protocols like X11, RDP, NX, TeamViewer, RAdmin, and a KVM switch, but the problem in each case is that they either let you remotely connect to the other system or exclusively 'bind' your peripherals to the active system. Not optimal after all.

    Synergy to the rescue

    Quote from their website: "Synergy lets you easily share your mouse and keyboard between multiple computers on your desk, and it's Free and Open Source. Just move your mouse off the edge of one computer's screen on to another. You can even share all of your clipboards. All you need is a network connection. Synergy is cross-platform (works on Windows, Mac OS X and Linux)."

    Yep, that's it! All I need for my setup here... Actually, I couldn't believe that I hadn't stumbled over Synergy earlier, but 'get over it' and there we go. And besides being Open Source, it's also free of charge. Donations to the developers are very welcome, and recently they introduced Synergy Premium - a possibility to buy so-called premium votes that can be used to put more weight / importance on specific issues or bugs that you would like the developers to look into.

    Installation and configuration

    Simply download the installation packages for your systems of choice, run the installer and enter some minor information about your network setup. I chose my desktop machine for the role of the Synergy server and configured my screen setup there. The screen setup currently allows you to build or connect up to 15 machines. The number of screens can be higher, as those machines might have multiple screens physically attached. Synergy takes this into the overall calculation and simply works as expected. I tried it for fun with a second monitor connected to each of the two laptops, for a total of 6 active screens. No flaws at all - stunning! All the other machines are configured as clients; a minimal example of what the server configuration boils down to is sketched at the end of this post. (Side note: the screenshot was taken on Windows 8 and pasted via clipboard into Gimp running on Ubuntu.)

    Resume

    Synergy is now definitely in my box of tools for my daily work, and amongst the first pieces of software I install after the operating system. It just simplifies my life and cleans up my desk. Never again without Synergy! Now I'm only waiting for an Android version to integrate my Galaxy Tab 10.1, too. ;-) Please check out that superb product and enjoy sharing one keyboard, one mouse and one clipboard between your various machines and operating systems.
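    For reference, here is a minimal sketch of what a server-side synergy.conf looks like for a two-machine setup; the screen names 'desktop' and 'laptop' are placeholders, and the same layout can be produced through the GUI described above rather than by hand.

        section: screens
            desktop:
            laptop:
        end

        section: links
            desktop:
                right = laptop
            laptop:
                left = desktop
        end

    With a file like this, the server can be started with synergys --config synergy.conf, and each client simply points at the server's hostname with synergyc.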

    Read the article

  • Windows Phone 8 Announcement

    - by Tim Murphy
    As if the Surface announcement on Monday wasn't exciting enough, today Microsoft announced that Windows Phone 8 will be coming this fall. That itself is great news, but the features coming were like confetti flying in all different directions. Given this speed I couldn't capture every feature they covered. A summary of what I did capture is listed below, starting with their eight main features.

    Common Core
    The first thing they covered is that Windows Phone 8 will share a core OS with Windows 8. It will also run natively on multiple cores. They mentioned that they have run it on up to 64 cores to this point. The phones, as you might expect, will at least start as dual core. If you remember, there were metrics saying that Windows Phone 7 performed operations faster on a single core than other platforms did with dual cores. The metrics they showed here indicate that Windows Phone 8 runs faster on comparable dual-core hardware than other platforms.

    New Screen Resolutions
    Screen resolution has never been an issue for me, but it has been a criticism of Windows Phone 7 in the media. Windows Phone 8 will support three screen resolutions: WVGA (800x480), WXGA (1280x768), and 720p (1280x720). Hopefully this makes pixel counters a little happier.

    MicroSD Support
    This was one of my pet peeves when I got my Samsung Focus. With Windows Phone 8 the operating system will support adding MicroSD cards after initial setup. Of course this is dependent on the hardware company implementing it, but I think we have seen that even feature-phone manufacturers have not had a problem supporting this in the past.

    NFC
    NFC has been an anticipated feature for some time. What Microsoft showed today included the fact that they didn't just want it to be for the phone. There is cross-platform NFC functionality between Windows Phone 8 and Windows 8. The demos, while possibly a bit fanciful, showed what could be achieved even in a retail environment. We are getting closer and closer to a Minority Report world with these technologies.

    Wallet
    Windows Phone 8 isn't the first platform to have a wallet concept. What they have done to differentiate themselves is to make it so that it is not dependent on a SIM-type chip like other platforms. They have also expanded the concept beyond just banks to other types of credits, such as airline miles.

    Nokia Mapping
    People have been envious of the Lumia phones having the Nokia mapping software. Now all Windows Phone 8 devices will use NavTeq data and will have the capability to run in an offline fashion. This is a major step forward from the Bing "touch for the next turn" maps.

    IT Administration
    The lack of features for enterprise administration and deployment was a complaint even before Windows Phone 7 was released. With the Windows Phone 8 release, features such as BitLocker and Secure Boot will be baked into the OS. We will also have the ability to privately sign and distribute applications.

    Changing Start Screen
    Joe Belfiore made a big deal about this aspect of the new release. Users will have more color themes available to them and the live tiles will be highly customizable. You will have the ability to resize and organize the tiles in a more dynamic way. This allows less important tiles, or ones with less information, to be made smaller.

    And There Is More
    So what other tidbits came out of the presentation? Later this summer the API for WP8 will be available. There will be developer events coming to a city near you. Another announcement of interest to developers is the ability to write applications at a native-code level. This is a boon for game developers and those who need highly efficient applications. As a topper on the cake there was mention of in-app payment.

    On the consumer side we also found out that all updates will be available over the air. Along with this came the fact that Microsoft will support all devices with updates for at least 18 months, and you will be able to subscribe for early updates. An update is coming for Windows Phone 7.5 customers to WP7.8. The main enhancement will be the new live tile features. The big bonus is that the update will bypass the carriers. I would assume, though, that you will be brought up to date with all previous patches that your carrier may not have released.

    There is so much more, but that is enough for one post. Needless to say, EXCITING!

    Read the article

  • Trouble with Samba Domain

    - by Arkevius
    I'm having a bit of trouble setting up this Samba domain correctly. I'm getting an Access Denied error when trying to add a Windows XP machine to the domain. I'll go through my scenario in detail, but for those of you wanting a TLDR summary, it'll be at the bottom of this post.

    I have an HP ProLiant server with Ubuntu 12.04 LTS installed. For this particular environment, I need this server to act as a PDC, file server, and print server. I began by updating and upgrading the packages (of course), then went on to install samba, gnome-desktop, wine, and cpanm. Samba was, of course, for the PDC and file/print services. The GUI was needed because a certain piece of software that has to be installed on there needs a GUI. Wine was needed because the software is Windows-native. And cpanm was for a Perl script I have running.

    For Samba, I went into the smb.conf file and enabled domain logons, changed the workgroup/domain name, set the logon script on a per-group basis (netlogon/%g), enabled the netlogon and profiles shares, and set up a couple of custom shares for the file service. The printer was added later, and seems to be working just fine. I then restarted the services and used the net groupmap command to ensure my Unix groups were mapped correctly to the Windows groups. After this, I went to a Windows box and was able to successfully join the domain without a problem.

    After some fidgeting with the software to get it running on the Windows boxes from the server (it's a records management system program, which stores its database files on the server), I went to add another computer to the domain. But now it's saying Access Denied. Before, when I had this trouble, it was because I forgot to add the group "machines" so Samba could create machine accounts. Thinking this was the case, I manually created the machine account to test this theory. However, it would still give me an Access Denied error. That must mean it has something to do with permissions now, correct?

    I've been fighting with this server for the past two weeks. If it's not one thing that's wrong, then it's something else completely different. This would be the third time I've reinstalled everything to start over. I'll post snippets of my system settings below. If anything else is needed, just say the word and I'll gather up the info. The Unix group 'domadmin' is the Domain Admins group.

    Samba Administrator account:

        administrator:x:1000:1000:Administrator,,,:/home/administrator:/bin/bash

    Administrator's groups:

        administrator adm cdrom sudo dip plugdev lpadmin sambashare domadmin crimestar

    Samba's configuration file (a snippet anyway):

        [global]
        workgroup = CITYPD
        server string = BPDServer
        dns proxy = no
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog = 0
        panic action = /usr/share/samba/panic-action %d
        security = user
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        map to guest = bad user
        domain logons = yes
        logon path = \\%L\srv\samba\profiles\%U
        logon script = logon.bat
        add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u
        domain master = yes
        usershare allow guests = yes

        [netlogon]
        comment = Network Logon Service
        path = /srv/samba/netlogon/%g
        guest ok = yes
        read only = yes
        browseable = no

        [profiles]
        comment = All Printers
        browseable = no
        path = /var/spool/samba
        printable = yes
        guest ok = no
        read only = yes
        create mask = 0700

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        read only = yes
        guest ok = no
        write list = root, @lpadmin

        [crimestar]
        comment = "Crimestar DB"
        path = /srv/crimestar/db
        valid users = @domadmin, @crimestar
        admin users = administrator
        writeable = yes
        guest ok = no
        browseable = no
        create mask = 0666
        directory mask = 0777

        [crimestarfiles]
        path = /home/administrator/.wine/drive_c/crimestar
        admin users = administrator
        browseable = yes

    ls -la on /srv/samba/profiles:

        drwxrwxrwx 2 root machines 4096 Nov 21 15:27 .
        drwxr-xr-x 4 root root     4096 Nov 21 15:28 ..

    ls -la on /srv/samba/netlogon:

        drwxr-xr-x 6 root root 4096 Nov 21 15:30 .
        drwxr-xr-x 4 root root 4096 Nov 21 15:28 ..
        drwxr-xr-x 2 root root 4096 Nov 21 15:30 crimestar
        drwxr-xr-x 2 root root 4096 Nov 21 18:13 domadmin
        drwxr-xr-x 3 root root 4096 Nov 21 15:30 guests
        drwxr-xr-x 2 root root 4096 Nov 21 15:29 users

    GroupMap list:

        Domain Users  (S-1-5-21-2978508755-2341913247-928297747-513) -> users
        Domain Admins (S-1-5-21-2978508755-2341913247-928297747-512) -> domadmin
        Domain Guests (S-1-5-21-2978508755-2341913247-928297747-514) -> nogroup

    TLDR: I'm getting an Access Denied error while trying to join a Windows box to a Samba domain, even after I successfully joined another computer without a problem. System settings / files are quoted above. Anyone have any ideas or suggestions?
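    Editorial aside, not part of the original question: with a tdbsam-backed Samba PDC, this class of Access Denied on join is often down to the joining account lacking rights to create machine accounts (the join is usually done as a Samba-enabled root), or to a hand-created machine account conflicting with the add machine script. A hedged checklist using standard Samba tooling, with the machine name winbox2 as a placeholder:

        # give root a Samba password so it can be used for the domain join
        sudo smbpasswd -a root

        # if a machine account was pre-created by hand, remove it and let
        # 'add machine script' recreate it during the join (note the trailing $)
        sudo pdbedit -x -u 'winbox2$'
        sudo userdel winbox2

        # sanity-check the configuration and list the accounts Samba knows about
        testparm -s
        sudo pdbedit -L

    Whether this applies to the asker's exact setup is uncertain, but it is the usual first round of checks before digging into share and filesystem permissions.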

    Read the article

  • data is not inserting in my db table [closed]

    - by Sarojit Chakraborty
Please see my addZoneToDb method in SubjectdetailsDAO.java below. The debugger runs fine up to session.getTransaction().commit();, but execution stops on that line and no data is inserted into my database table. Why does the commit fail? Please help me with this.

Now I am getting this error:

Struts Problem Report

Struts has detected an unhandled exception:

Messages:    org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource;
File:        org/hibernate/validator/event/ValidateEventListener.java
Line number: 172

Stacktraces:

java.lang.reflect.InvocationTargetException
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    java.lang.reflect.Method.invoke(Method.java:601)
    com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441)
    com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280)
    com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243)
    ... (the Struts 2 interceptor stack: DefaultWorkflow, Validation, ConversionError, Parameters, StaticParameters, Multiselect, Checkbox, FileUpload, ModelDriven, ScopedModelDriven, Debugging, Chaining, Prepare, I18n, ServletConfig, Alias, ExceptionMapping) ...
    org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52)
    org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:488)
    org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77)
    org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91)
    ... (Tomcat filter chain, valve and connector frames) ...
    java.lang.Thread.run(Thread.java:722)

java.lang.NoSuchMethodError: org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource;
    org.hibernate.validator.event.ValidateEventListener.onPreInsert(ValidateEventListener.java:172)
    org.hibernate.action.EntityInsertAction.preInsert(EntityInsertAction.java:156)
    org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:49)
    org.hibernate.engine.ActionQueue.execute(ActionQueue.java:250)
    org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:234)
    org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:141)
    org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
    org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
    org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
    org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338)
    org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
    v.esoft.dao.SubjectdetailsDAO.SubjectdetailsDAO.addZoneToDb(SubjectdetailsDAO.java:185)
    v.esoft.actions.LoginAction.datatobeinsert(LoginAction.java:53)
    ... (reflection and Struts interceptor frames as above) ...
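From searching on this, the commit appears to fail while Hibernate flushes the pending insert: the ValidateEventListener from the legacy hibernate-validator 3.x jar calls PreInsertEvent.getSource(), and the Hibernate Core jar actually on the classpath no longer has that method with that signature, hence the java.lang.NoSuchMethodError. That would make this a version clash between hibernate-core and hibernate-validator rather than a problem in addZoneToDb itself. If that is the cause, putting a matched pair of jars on the classpath should fix it; a sketch of what that could look like in Maven terms, where the versions below are only an illustrative matched pair, not a prescription for this project:

<!-- Sketch only: the legacy hibernate-validator 3.x must match the
     Hibernate Core line it was built against; versions are examples. -->
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>3.3.2.GA</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>3.1.0.GA</version>
</dependency>

Alternatively, if Hibernate Validator is not actually used by the application, removing the legacy validator jar from the classpath should stop the failing listener from being registered at all.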
SubjectdetailsDAO.java (the problem is in addZoneToDb):

package v.esoft.dao.SubjectdetailsDAO;

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.hibernate.HibernateException;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.criterion.Order;

import com.opensymphony.xwork2.ActionSupport;

import v.esoft.connection.HibernateUtil;
import v.esoft.pojos.Subjectdetails;

public class SubjectdetailsDAO extends ActionSupport {

    private static Session session = null;
    private static SessionFactory sessionFactory = null;
    static Transaction transaction = null;
    private String currentDate;
    SimpleDateFormat formatter1 = new SimpleDateFormat("yyyy-MM-dd");
    private java.util.Date currentdate;

    public SubjectdetailsDAO() {
        sessionFactory = HibernateUtil.getSessionFactory();
        SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
        currentdate = new java.util.Date();
        currentDate = formatter.format(currentdate);
    }

    public List getAllCustomTempleteRoutinesForGrid() {
        List list = new ArrayList();
        try {
            session = sessionFactory.openSession();
            list = session.createCriteria(Subjectdetails.class)
                          .addOrder(Order.desc("subjectId")).list();
        } catch (Exception e) {
            System.out.println("Exception in getAllCustomTempleteRoutines" + e);
        } finally {
            try {
                // HibernateUtil.shutdown();
            } catch (Exception e) {
                System.out.println("Exception In getExerciseListByLoginId Resource closing :" + e);
            }
        }
        return list;
    }

    // showing list on grid
    private static List<Subjectdetails> custLst = new ArrayList<Subjectdetails>();
    static int total = 50;

    static {
        SubjectdetailsDAO cts = new SubjectdetailsDAO();
        Iterator iterator1 = cts.getAllCustomTempleteRoutinesForGrid().iterator();
        while (iterator1.hasNext()) {
            Subjectdetails get = (Subjectdetails) iterator1.next();
            custLst.add(get);
        }
    }

    /* update Routines List by WorkId */
    public int updatesub(Subjectdetails s) {
        int updated = 0;
        try {
            session = sessionFactory.openSession();
            transaction = session.beginTransaction();
            Query query = session.createQuery(
                    "UPDATE Subjectdetails set subjectName = :routineName1 WHERE subjectId = :workoutId1");
            query.setString("routineName1", s.getSubjectName());
            query.setInteger("workoutId1", s.getSubjectId());
            updated = query.executeUpdate();
            if (updated != 0) {
            }
            transaction.commit();
        } catch (Exception e) {
            if (transaction != null && transaction.isActive()) {
                try {
                    transaction.rollback();
                } catch (Exception e1) {
                    System.out.println("Exception in addUser() Rollback :" + e1);
                }
            }
        } finally {
            try {
                session.flush();
                session.close();
            } catch (Exception e) {
                System.out.println("Exception In addUser Resource closing :" + e);
            }
        }
        return updated;
    }

    public int addSubjectt(Subjectdetails s) {
        int inserted = 0;
        Subjectdetails ss = new Subjectdetails();
        try {
            session = sessionFactory.openSession();
            transaction = session.beginTransaction();
            ss.setSubjectName(s.getSubjectName());
            session.save(ss);
            System.out.println("Successfully data insert in database");
            inserted++;
            if (inserted != 0) {
            }
            transaction.commit();
        } catch (Exception e) {
            if (transaction != null && transaction.isActive()) {
                try {
                    transaction.rollback();
                } catch (Exception e1) {
                    System.out.println("Exception in addUser() Rollback :" + e1);
                }
            }
        } finally {
            try {
                session.flush();
                session.close();
            } catch (Exception e) {
                System.out.println("Exception In addUser Resource closing :" + e);
            }
        }
        return inserted;
    }

    /* Get all Routines List by LoginID */
    public List getSubjects() {
        List list = null;
        try {
            session = sessionFactory.openSession();
            list = session.createCriteria(Subjectdetails.class).list();
        } catch (Exception e) {
            System.out.println("Exception in getRoutineList() :" + e);
        } finally {
            try {
                session.flush();
                session.close();
            } catch (Exception e) {
                System.out.println("Exception In getUserList Resource closing :" + e);
            }
        }
        return list;
    }

    public int addZoneToDb(String countryName, Integer loginId) {
        int inserted = 0;
        try {
            System.out.println("---------1--------");
            Session session = HibernateUtil.getSessionFactory().openSession();
            System.out.println("---------2------session--" + session);
            session.beginTransaction();
            Subjectdetails country =
                    new Subjectdetails(countryName, loginId, currentdate, loginId, currentdate);
            System.out.println("---------2------country--" + country);
            session.save(country);
            System.out.println("-------after save--");
            inserted++;
            session.getTransaction().commit();
            System.out.println("-------after commits--");
        } catch (Exception e) {
            if (transaction != null && transaction.isActive()) {
                try {
                    transaction.rollback();
                } catch (Exception e1) {
                }
            }
        } finally {
            try {
            } catch (Exception e) {
            }
        }
        return inserted;
    }

    public int nextId() {
        return total++;
    }

    public List<Subjectdetails> buildList() {
        return custLst;
    }

    public static int count() {
        return custLst.size();
    }

    public static List<Subjectdetails> find(int o, int q) {
        return custLst.subList(o, q);
    }

    public void save(Subjectdetails c) {
        custLst.add(c);
    }

    public static Subjectdetails findById(Integer id) {
        try {
            for (Subjectdetails c : custLst) {
                if (c.getSubjectId() == id) {
                    return c;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }

    public void update(Subjectdetails c) {
        for (Subjectdetails x : custLst) {
            if (x.getSubjectId() == c.getSubjectId()) {
                x.setSubjectName(c.getSubjectName());
            }
        }
    }

    public void delete(Subjectdetails c) {
        custLst.remove(c);
    }

    public static List<Subjectdetails> findNotById(int id, int from, int to) {
        List<Subjectdetails> subLst = custLst.subList(from, to);
        List<Subjectdetails> temp = new ArrayList<Subjectdetails>();
        for (Subjectdetails c : subLst) {
            if (c.getSubjectId() != id) {
                temp.add(c);
            }
        }
        return temp;
    }

    public static List<Subjectdetails> findLesserAsId(int id, int from, int to) {
        List<Subjectdetails> subLst = custLst.subList(from, to);
        List<Subjectdetails> temp = new ArrayList<Subjectdetails>();
        for (Subjectdetails c : subLst) {
            if (c.getSubjectId() <= id) {
                temp.add(c);
            }
        }
        return temp;
    }

    public static List<Subjectdetails> findGreaterAsId(int id, int from, int to) {
        List<Subjectdetails> subLst = custLst.subList(from, to);
        List<Subjectdetails> temp = new ArrayList<Subjectdetails>();
        for (Subjectdetails c : subLst) {
            if (c.getSubjectId() >= id) {
                temp.add(c);
            }
        }
        return temp;
    }
}

Subjectdetails.hbm.xml:

<hibernate-mapping>
    <class name="vb.sofware.pojos.Subjectdetails" table="subjectdetails" catalog="vbsoftware">
        <id name="subjectId" type="int">
            <column name="subject_id" />
            <generator class="increment" />
        </id>
        <property name="subjectName" type="string">
            <column name="subject_name" length="150" />
        </property>
        <property name="createrId" type="java.lang.Integer">
            <column name="creater_id" />
        </property>
        <property name="createdDate" type="timestamp">
            <column name="created_date" length="19" />
        </property>
        <property name="updateId" type="java.lang.Integer">
            <column name="update_id" />
        </property>
        <property name="updatedDate" type="timestamp">
            <column name="updated_date" length="19" />
        </property>
    </class>
</hibernate-mapping>

My POJO, Subjectdetails.java:

package v.esoft.pojos;

// Generated Oct 6, 2012 1:58:21 PM by Hibernate Tools 3.4.0.CR1

import java.util.Date;

/**
 * Subjectdetails generated by hbm2java
 */
public class Subjectdetails implements java.io.Serializable {

    private int subjectId;
    private String subjectName;
    private Integer createrId;
    private Date createdDate;
    private Integer updateId;
    private Date updatedDate;

    public Subjectdetails(String subjectName) {
        //this.subjectId = subjectId;
        this.subjectName = subjectName;
    }

    public Subjectdetails() {
    }

    public Subjectdetails(int subjectId) {
        this.subjectId = subjectId;
    }

    public Subjectdetails(int subjectId, String subjectName, Integer createrId,
            Date createdDate, Integer updateId, Date updatedDate) {
        this.subjectId = subjectId;
        this.subjectName = subjectName;
        this.createrId = createrId;
        this.createdDate = createdDate;
        this.updateId = updateId;
        this.updatedDate = updatedDate;
    }

    public Subjectdetails(String subjectName, Integer createrId, Date createdDate,
            Integer updateId, Date updatedDate) {
        this.subjectName = subjectName;
        this.createrId = createrId;
        this.createdDate = createdDate;
        this.updateId = updateId;
        this.updatedDate = updatedDate;
    }

    public int getSubjectId() { return this.subjectId; }
    public void setSubjectId(int subjectId) { this.subjectId = subjectId; }

    public String getSubjectName() { return this.subjectName; }
    public void setSubjectName(String subjectName) { this.subjectName = subjectName; }

    public Integer getCreaterId() { return this.createrId; }
    public void setCreaterId(Integer createrId) { this.createrId = createrId; }

    public Date getCreatedDate() { return this.createdDate; }
    public void setCreatedDate(Date createdDate) { this.createdDate = createdDate; }

    public Integer getUpdateId() { return this.updateId; }
    public void setUpdateId(Integer updateId) { this.updateId = updateId; }

    public Date getUpdatedDate() { return this.updatedDate; }
    public void setUpdatedDate(Date updatedDate) { this.updatedDate = updatedDate; }
}

And my SQL table:

CREATE TABLE IF NOT EXISTS `subjectdetails` (
  `subject_id` int(3) NOT NULL,
  `subject_name` varchar(150) DEFAULT NULL,
  `creater_id` int(5) DEFAULT NULL,
  `created_date` datetime DEFAULT NULL,
  `update_id` int(5) DEFAULT NULL,
  `updated_date` datetime DEFAULT NULL,
  PRIMARY KEY (`subject_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
