Search Results



  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this blog is to demonstrate how we can use PowerShell to automate connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port and open (and close) it through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect to SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same Cloud Service

    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PowerShell and other commands can be run against them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s).

    Note: RDP (Remote Desktop Protocol) is kept enabled on both VMs by default so we can connect to them and check the connections to the SQL instances; this is for demo purposes only and is not actually required.

    Step 1 – Provision VMs and Configure Ports

    Provision VM1 (DemoVM1) as follows (see the example screenshots below if using the portal). Then provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 (see the example screenshots below if using the portal). After provisioning the two VMs, here are the default port configurations, for example:

    Step 2 – Verify / Confirm the TCP Port Used by the Database Engine

    By default, the port will be 1433 – this can be changed to a different port number if desired.

    1. RDP to each of the VMs created above – this also ensures the VMs complete Sysprep and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server database engine; in this case 1433.
    4. Change the authentication mode from Windows Authentication to Mixed Mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.

    Step 3 – Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:                            DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:                              Yes
        Direction:                            In
        Profiles:                             Domain,Private,Public
        Grouping:
        LocalIP:                              Any
        RemoteIP:                             Any
        Protocol:                             TCP
        LocalPort:                            1433
        RemotePort:                           Any
        Edge traversal:                       No
        Action:                               Allow
        Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1

    Step 6 – Close port 1433 in the Windows firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1

    Because port 1433 has been closed in the Windows firewall on VM1 (in step 6), we can no longer connect from DemoVM2 remotely to DemoVM1.

    Scenario 2: VMs provisioned in different Cloud Services

    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PowerShell and other commands can be run against them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s).

    Note: RDP (Remote Desktop Protocol) is kept enabled on both VMs by default so we can connect to them and check the connections to the SQL instances; this is for demo purposes only and is not actually needed.

    Step 1 – Provision new VM3

    Provision VM3 (DemoVM3) as follows (see the example screenshots below if using the portal). After provisioning is complete, here are the default port configurations:

    Step 2 – Add a public port on VM1 for connections from VM3 to its DB instance

    Since VM3 and VM1 are not in the same cloud service, we need to specify the full DNS address, including the public port, when connecting between the machines. We shall add a public port (57000 in this case) that is linked to private port 1433, which will be used later to connect to the DB instance.

    Step 3 – Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:                            DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:                              Yes
        Direction:                            In
        Profiles:                             Domain,Private,Public
        Grouping:
        LocalIP:                              Any
        RemoteIP:                             Any
        Protocol:                             TCP
        LocalPort:                            1433
        RemotePort:                           Any
        Edge traversal:                       No
        Action:                               Allow
        Ok.

    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1

    RDP into VM3, launch SSMS, and connect to VM1's DB instance as follows. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1

    Because port 1433 has been closed in the Windows firewall on VM1 (in step 6), we can no longer connect from DemoVM3 remotely to DemoVM1.
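    The same open/test/close cycle can also be scripted end to end rather than typed into an interactive session. Here is a minimal, non-interactive PowerShell sketch, assuming the same endpoint used in the examples above (condemo.cloudapp.net, port 61503); the rule and variable names are illustrative only:

        # Establish a remote session to the VM (same endpoint as the interactive examples).
        $cred = Get-Credential
        $opts = New-PSSessionOption -SkipCACheck -SkipCNCheck
        $session = New-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential $cred -UseSSL -SessionOption $opts

        # Open port 1433 in the Windows firewall on the VM.
        Invoke-Command -Session $session -ScriptBlock {
            netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
        }

        # ... run SQL connectivity tests from the other VM here ...

        # Close the port again and tear down the session.
        Invoke-Command -Session $session -ScriptBlock {
            netsh advfirewall firewall delete rule name="DemoVM1Port"
        }
        Remove-PSSession $session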
    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References

    SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application, I thought "Hey, this should be easy!" because Windows Phone 7 development is based on Silverlight, and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can't use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there's a reference to an assembly that's not available (System.Windows.Browser).

    My second thought was "OK, no problem!" because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL in which you specify the data you'd like to retrieve, e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that's running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site's pages, so the application has to build credentials and pass them to SharePoint's WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn't support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there's already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I'm writing in this paragraph are just assumptions that I make, which make a lot of sense IMHO; I don't have any info that all of this will happen, but I really hope so.

    So are SharePoint developers out of the Windows Phone development game until this gets fixed? Luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
    The following code snippet gets all the announcements of an Announcements list in a SharePoint site:

        HttpWebRequest webReq =
            (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
        webReq.Credentials = new NetworkCredential("username", "password");

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

                XDocument xdoc = XDocument.Load(
                    ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

                XNamespace ns = "http://www.w3.org/2005/Atom";
                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                        MessageBox.Show(item.Title);
                });
            }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is then displayed in a MessageBox. Check out my previous post if you'd like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it's of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

        delegate void GetAtomFeedCallback(Stream responseStream);

        public MainPage()
        {
            InitializeComponent();

            SupportedOrientations = SupportedPageOrientation.Portrait |
                SupportedPageOrientation.Landscape;

            string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
            string username = "username";
            string password = "password";
            string domain = "";

            GetAtomFeed(url, username, password, domain, (s) =>
            {
                XNamespace ns = "http://www.w3.org/2005/Atom";
                XDocument xdoc = XDocument.Load(s);

                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                    {
                        MessageBox.Show(item.Title);
                    }
                });
            });
        }

        private static void GetAtomFeed(string url, string username,
            string password, string domain, GetAtomFeedCallback cb)
        {
            HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
            webReq.Credentials = new NetworkCredential(username, password, domain);

            webReq.BeginGetResponse(
                (result) =>
                {
                    HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                    cb(resp.GetResponseStream());
                }, webReq);
        }

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane series. Here is the list of selected articles from SQLAuthority.com across all these years. Instead of listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2007

    Set Server Level FILLFACTOR Using T-SQL Script
    Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.

    Limitation of Online Index Rebuild Operation
    An online operation means that while it is happening the database remains in normal operational condition; the processes participating in online operations do not require exclusive access to the database.

    Get Permissions of My Username / Userlogin on Server / Database
    A few days ago, I was invited to one of the largest database companies. I was asked to review the database schema and propose changes to it. A special username / user login was created for me so I could review their database. I was very much interested to know what kind of permissions I had been assigned at the server level and database level. I did not feel like asking the Sr. DBA about the permissions.

    Simple Example of WHILE Loop With CONTINUE and BREAK Keywords
    This question is one of those questions which is very simple and most users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword exits the WHILE loop, and control moves to the next statement after the loop. The CONTINUE keyword skips all the statements after it, and control is sent back to the first statement of the WHILE loop. (A short T-SQL sketch follows below.)

    Forced Parameterization and Simple Parameterization – T-SQL and SSMS
    When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries.

    2008

    Transaction and Local Variables – Swap Variables – Update All At Once Concept
    Summary: Transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied together when the statement is committed. First of all, I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect.

    Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis
    Just a day ago, while I was working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue further, let me make very clear that INNER JOIN should be used where it can be used, and simulating INNER JOIN using any other JOINs will degrade performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.
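    As promised above, here is a minimal T-SQL sketch (my own illustration, not the original article's script) showing how BREAK and CONTINUE transfer control in a WHILE loop:

        DECLARE @i INT;
        SET @i = 0;
        WHILE @i < 10
        BEGIN
            SET @i = @i + 1;
            IF @i = 3 CONTINUE;  -- skip the rest of this iteration; control returns to the WHILE condition
            IF @i = 6 BREAK;     -- exit the loop entirely; control moves past END
            PRINT @i;            -- prints 1, 2, 4, 5
        END
        PRINT 'Done';            -- executes after BREAK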
    2009

    Introduction to Business Intelligence – Important Terms & Definitions
    Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making.

    Difference Between Candidate Keys and Primary Key
    Candidate Key – A Candidate Key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table, and each Candidate Key can qualify as the Primary Key. Primary Key – A Primary Key is a column or combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key.

    2010

    Taking Multiple Backups of a Database in a Single Command – Mirrored Database Backup
    I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by the client that they were going to take a backup of the database and also make a copy of it at the same time. I expressed that it was surely possible if they used a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. Now this was something not clear to me; I said it was not possible, and so I asked them to show me the script.

    Corrupted Backup File and Unsuccessful Restore
    The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test had been done. As expected, the answer was NEVER. There were no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one there wanted that atmosphere any more than I did.

    2011

    TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution
    SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired, it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset.

    DATEDIFF – Accuracy of Various Dateparts
    If you want accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes.

    Dedicated Access Control for SQL Server Express Edition
    http://www.youtube.com/watch?v=1k00z82u4OI

    Book Signing at SQLPASS

    2012

    Who I Am And How I Got Here – True Story as Blog Post
    If there were a shortcut to success – I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn, and there is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier.

    Vacation, Travel and Study – A New Concept
    Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
    Order By Numeric Values Formatted as String
    We have a table which has a column containing alphanumeric data. The data always begins with an integer and ends with a string. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. (See the sketch at the end of this post.)

    Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video
    One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion – unless you are a DBA responsible for such an instance – the errors faced during installation are pretty rare as well.
    http://www.youtube.com/watch?v=1k00z82u4OI

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
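    As promised above, here is a minimal sketch of one common way to order alphanumeric values by their leading integer (table and column names are hypothetical; this is an illustration, not the original article's script):

        -- Hypothetical table: Items(Code VARCHAR(50)), with values like '2-Widget', '10-Widget'.
        -- A plain ORDER BY Code sorts '10-Widget' before '2-Widget'; sort by the numeric prefix instead.
        SELECT Code
        FROM Items
        ORDER BY CAST(LEFT(Code, PATINDEX('%[^0-9]%', Code + 'x') - 1) AS INT);
        -- PATINDEX finds the first non-digit; appending 'x' guarantees one exists,
        -- so LEFT() extracts only the leading digits before the CAST to INT.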

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system – partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will.

    I have tried several things – here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value – wide open 777
    - remove any system data (applications, system files, including hidden files, to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index after adding each folder to watch for issues (interesting here is that no issues occurred except for the Documents folder – when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble; it appears it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (by searching for anything beginning with md in Activity Monitor – All Processes – and quitting it) and deleted the .Spotlight-V100 directory from the affected drive. I restarted Spotlight indexing by adding the drive to the Spotlight privacy list and then removing it.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11---

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop output stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are currently excluded from Spotlight indexing. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__

    I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where the PID is the process ID of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support – they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated – mds and mdworker(s) would start indexing, eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds sat at 90% or so of CPU. But I did finally figure out that each time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again.

    I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files – Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
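    For anyone who wants to repeat this diagnostic, here is a rough shell sketch of the monitoring loop described above (it assumes an OS X install where DTrace's opensnoop is available, as on 10.5/10.6; the log paths are illustrative):

        # Snoop every running mds/mdworker process and log the files each one opens.
        for pid in $(ps ax | grep -E '[m]ds|[m]dworker' | awk '{print $1}'); do
          sudo opensnoop -p "$pid" > "/tmp/snoop.$pid.log" 2>&1 &
        done
        # Let this run; once mdworker disappears from Activity Monitor, stop the
        # snoops (Ctrl-C / kill) and inspect the last file each process touched:
        tail -n 1 /tmp/snoop.*.log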

    Read the article

  • Java JRE 1.6.0_65 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    The latest Java Runtime Environment 1.6.0_65 (a.k.a. JRE 6u65-b14) and later updates on the JRE 6 codeline are now certified with Oracle E-Business Suite Release 11i and 12 for Windows-based desktop clients.

    Effects of new support dates on Java upgrades for EBS environments

    Support dates for the E-Business Suite and Java have changed. Please review the sections below for more details:

    - What does this mean for Oracle E-Business Suite users?
    - Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
    - Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

    All JRE 6 and 7 releases are certified with EBS upon release

    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline. We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops.

    What's new in this Java release?

    Java 6 is now available only via My Oracle Support for E-Business Suite users. You can find links to this release, including Release Notes, documentation, and the actual Java downloads here: All Java SE Downloads on MOS (Note 1439822.1)

    32-bit and 64-bit versions certified

    This certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on:

    - Windows XP Service Pack 3 (SP3)
    - Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2)
    - Windows 7 and Windows 7 Service Pack 1 (SP1)

    64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1).

    Worried about the 'mismanaged session cookie' issue?

    No need to worry – it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23, and it carries forward into all future JRE releases. In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22.

    Implications of Java 6 End of Public Updates for EBS Users

    The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap. The latest updates to that page (as of Sept. 19, 2012) state (emphasis added):

    Java SE 6 End of Public Updates Notice
    After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support.

    What does this mean for Oracle E-Business Suite users?

    EBS users fall under the category of "enterprise users" above.
    Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017. In other words, nothing changes for EBS users after February 2013. EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017.

    How can EBS customers obtain Java 6 updates after the public end-of-life?

    EBS customers can download Java 6 patches from My Oracle Support. For a complete list of all Java SE patch numbers, see: All Java SE Downloads on MOS (Note 1439822.1)

    Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?

    This upgrade is highly recommended but remains optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients. Java 6 is covered by Extended Support until June 2017. All E-Business Suite customers must upgrade to JRE 7 by June 2017.

    Coexistence of JRE 6 and JRE 7 on Windows desktops

    The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default, so JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

    Applying updates to JRE 6 and JRE 7 on Windows desktops

    Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. Auto-update will only keep JRE 7 up-to-date for Windows users with both JRE 6 and 7 installed. JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support. EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2). The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page; an RSS feed is available on that site for those who would like to be kept up-to-date.

    What do Mac users need?

    Mac users running Mac OS 10.7 or 10.8 can run JRE 7 plug-ins. See this article: EBS 12 certified with Mac OS X 10.7 and 10.8 with Safari 6 and JRE 7

    Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

    JRE is used for desktop clients; JDK is used for application tier servers. JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. Java SE 6 is covered by Extended Support until June 2017. All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms. JDK 7 is certified with E-Business Suite 12.
    See: Java (JDK) 7 Certified for E-Business Suite 12 Servers

    References

    - Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
    - Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
    - Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    - Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles

    - Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    - Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here.

    In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

    "- You could have talked about how by deploying Oracle's open-standards-based technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining a perfect single view of employees and customers.
    - You could have talked about how you are now able to cut response times to your audit and security teams to 1/10th of your former times.
    Instead you spent 6 minutes talking about single sign-on and self provisioning? If I didn't know your IDM offer so well, I would now be wondering what its differences from Microsoft's offer were. Sorry for not giving a positive comment here but, please, your IDM suite is very good and you simply aren't promoting it well enough."

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side – Sun resellers, Solaris stuff; I even worked with Sun's engineering in the making of a hierarchical storage product (Sun CIS, if you know it) – and the applications side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions for most big customers in Portugal. To name a few projects:

    - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom)
    - The identity repository of 5/5 of the biggest Portuguese banks
    - The Portuguese government's federation of services project

    DP: OK, in your blog response, you mentioned 3 topics. 1. Using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles, etc.?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path – the use of proprietary technologies in interoperability solutions – but then reality kicks in. As an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. As a good part of the systems and apps were running on UNIX, a connector became needed in order to have the UNIX boxes authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified.
    A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self-service user portal. To keep the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting up an Oracle LDAP server in the middle and replicating the user info from AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before an account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were, for the first time, integrated with 5 minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to AD for their authentication needs, so from the start everybody knew they weren't 100% free of migration pains, but now they had a single point of problems to look at.

    If you're looking for numbers:

    - 500K directory entries (users)
    - 2-300 systems

    After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employees and customers. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will tomorrow be sold again. Companies that spread across different markets may see the regulator forcing a sale of part of the company for monopoly reasons, and companies that are in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard and quite contradictory. On one hand, we need to give all of our employees access to the relevant systems, apps and resources – and we already have marketing talking with us, trying to find out who's a customer of the bought company but not of ours, to address. On the other hand, we have to do that keeping in mind that we may have to break up all that effort, and that different countries' legislation may become a problem for a full integration plan.

    That's a job for user federation. You don't want to be the one telling your President that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
    Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view; you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • How do I stop XNA/Visual Studio from rebuilding my content project every time I build?

    - by Phil Quinn
    My group and I are working on a game in XNA 4.0 with Visual Studio 2010/2012. The main solution has 6 projects: 2 XNA game projects (1 executable / 1 class library), 1 WPF executable for the level editor, 2 standard class libraries, and a content project. Originally, the editor and engine XNA game projects had content references to separate content projects. Recently, I consolidated the content projects into one to simplify asset additions. Since pushing these changes to our git repo, certain members of my group have been experiencing weird build issues: every time they run the project, they have to re-build all of the assets. This happens regardless of whether any changes were made, even if they run the project directly after building.

    I've taken a few steps to figure out why this is happening. Below is the MSBuild output set to Normal verbosity. The seemingly important part is at 4, with the line

        4>  Rebuilding all content because build settings have changed

        1>------ Build started: Project: Engine.Core, Configuration: Debug x86 ------
        1>Build started 11/29/2012 3:24:24 AM.
        1>ResolveAssemblyReferences:
        1>  A TargetFramework profile exclusion list will be generated.
        1>EmbedXnaFrameworkRuntimeProfile:
        1>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        1>GenerateTargetFrameworkMonikerAttribute:
        1>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        1>CoreCompile:
        1>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        1>XnaWriteCacheFile:
        1>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        1>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        1>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        1>_CopyAppConfigFile:
        1>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
        1>CopyFilesToOutputDirectory:
        1>  Engine.Core -> <solution-dir>\src\Engine.Core\bin\x86\Debug\TimeSink.Engine.Core.dll
        1>
        1>Build succeeded.
        1>
        1>Time Elapsed 00:00:00.13
        2>------ Build started: Project: TimeSink.Entities, Configuration: Debug x86 ------
        2>Build started 11/29/2012 3:24:25 AM.
        2>ResolveAssemblyReferences:
        2>  A TargetFramework profile exclusion list will be generated.
        2>EmbedXnaFrameworkRuntimeProfile:
        2>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        2>GenerateTargetFrameworkMonikerAttribute:
        2>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        2>CoreCompile:
        2>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        2>XnaWriteCacheFile:
        2>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        2>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        2>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        2>CopyFilesToOutputDirectory:
        2>  TimeSink.Entities -> <solution-dir>\src\TimeSink.Entities\bin\x86\Debug\TimeSink.Entities.dll
        2>
        2>Build succeeded.
        2>
        2>Time Elapsed 00:00:00.11
        3>------ Build started: Project: Editor (Editor\Editor), Configuration: Debug x86 ------
        4>------ Build started: Project: Engine.Game, Configuration: Debug x86 ------
        3>Build started 11/29/2012 3:24:25 AM.
        3>CoreCompile:
        3>  All content is already up to date
        3>ResolveAssemblyReferences:
        3>  A TargetFramework profile exclusion list will be generated.
        3>EmbedXnaFrameworkRuntimeProfile:
        3>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        3>GenerateTargetFrameworkMonikerAttribute:
        3>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        3>CoreCompile:
        3>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        3>XnaWriteCacheFile:
        3>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        3>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        3>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        3>_CopyOutOfDateNestedContentItemsToOutputDirectory:
        3>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files.
        3>CopyFilesToOutputDirectory:
        3>  Editor -> <solution-dir>\src\Editor\Editor\bin\x86\Debug\Editor.dll
        3>
        3>Build succeeded.
        3>
        3>Time Elapsed 00:00:00.39
        4>Build started 11/29/2012 3:24:25 AM.
        4>CoreCompile:
        4>  Rebuilding all content because build settings have changed
        4>  Building Textures\circle.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb
        4>  Importing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4>  Processing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb
        4>  Building Textures\giroux.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb
        4>  Importing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4>  Processing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb
        4>  Building Textures\Body_Neutral.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb
        4>  Importing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4>  Processing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb
        4>  Building font.spritefont -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb
        4>  Importing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.FontDescriptionImporter
        4>  Processing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.Processors.FontDescriptionProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb
        4>ResolveAssemblyReferences:
        4>  A TargetFramework profile exclusion list will be generated.
        4>EmbedXnaFrameworkRuntimeProfile:
        4>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        4>GenerateTargetFrameworkMonikerAttribute:
        4>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        4>CoreCompile:
        4>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        4>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        4>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        4>_CopyOutOfDateNestedContentItemsToOutputDirectory:
        4>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files.
        4>_CopyAppConfigFile:
        4>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
        4>CopyFilesToOutputDirectory:
        4>  Engine.Game -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Engine.Game.exe
        4>IncrementalClean:
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\circle.xnb".
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\giroux.xnb".
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Body_Neutral.xnb".
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\font.xnb".
        4>
        4>Build succeeded.
        4>
        4>Time Elapsed 00:00:01.72
        ========== Build: 4 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========

    I can't think of how build settings could change between consecutive executions. Like I said, this only happens for half our group. One member is on a 32-bit Windows 7 Prof Boot Camp partition on a Mac. Everyone else, including those who don't have the issue, is running straight 64-bit Windows 7 Prof. Both have tried using VS 2010 and VS 2012. Any insight would be greatly appreciated. Also, I can post more details upon request if this isn't thorough enough.

    Read the article

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features? Maybe you want to use one or two controls like the LineChart or a MediaView. Maybe you want to embed a large scene graph as an initial step in porting your application to FX. A hybrid Swing/FX application might just be the answer.

    Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications was not as simple as for a "pure" JavaFX application. The existing tools focused on packaging FX applications, or Swing applications – they did not account for hybrid applications. But with JavaFX 2.2 the tools include support for this hybrid application use case.

    Solution

    In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications. You now use the same deployment approach as you would for pure JavaFX applications: just bundle your main application jar with the fx:jar ant task and then generate html/jnlp files using fx:deploy. The only difference is setting the toolkit attribute on the fx:application tag, as shown below:

        <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

    The value of ${main.class} in the example above is your application class, which has a main method. It does not need to extend the JavaFX Application class.

    The resulting package provides support for the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to the packages created for a pure FX application. You will see two JNLP files generated in the case of a hybrid application – one for use from a Swing applet and another for the webstart launch.

    Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply; for example, a Swing application can not use JavaFX Preloaders, and code changes are necessary to support HTML splash screens.

    Why should I use the JavaFX ant tasks for packaging my Swing application?

    While using the FX packaging tools for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach. The primary justification for our packaging tools is to simplify the creation of your application artifacts and to reduce manual errors. Plus, no one should have to write JNLP by hand.

    Some specific benefits include:

    - Your application jar will include a launcher program. This improves your standalone launch by checking for the JavaFX runtime, guiding the user through any necessary installations, and setting the system proxy for Java.
    - The ant tasks will generate JNLP and HTML files for your Swing app. This avoids learning unnecessary details about JNLP and eliminates error-prone hand editing of JNLP files; it simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance (you can also embed the signing certificate details to improve the user's experience); and it allows the use of web page templates to inject the generated code directly into your actual web page instead of forcing you to copy/paste the generated code snippets.

    What about native packaging?

    Absolutely! The very same ant task can generate a native bundle for a Swing application with JavaFX content.
    Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg. I also used another feature in these examples: a click-through license agreement for .exe installers and OS X DMG drag installers.

    Small Caveat

    This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application. If you are trying to embed JavaFX content into an existing project (with an existing build/packaging process), then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process, this should be a workable approach. In some cases the solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle. The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest; this may lead to some redundant steps in your build process. Hopefully the benefits will outweigh the problems.

    This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us. You can comment here on the blog or file issues to JIRA.

    Sample code

    Here is the full ant code used to package SwingInterop. You can grab the latest JavaFX samples and try it yourself:

        <target name="-post-jar">
            <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                     uri="javafx:com.sun.javafx.tools.ant"
                     classpath="${javafx.tools.ant.jar}"/>

            <!-- Mark application as Swing-based -->
            <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

            <!-- Create doubleclickable jar file with embedded launcher -->
            <fx:jar destfile="${dist.jar}">
                <fileset dir="${build.classes.dir}"/>
                <fx:application refid="swingFXApp" name="SwingInterop"/>
                <manifest>
                    <attribute name="Implementation-Vendor" value="${application.vendor}"/>
                    <attribute name="Implementation-Title" value="${application.title}"/>
                    <attribute name="Implementation-Version" value="1.0"/>
                </manifest>
            </fx:jar>

            <!-- sign application jar. Use new self signed certificate -->
            <delete file="${build.dir}/test.keystore"/>
            <genkey alias="TestAlias" storepass="xyz123"
                    keystore="${build.dir}/test.keystore"
                    dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/>
            <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123">
                <fileset file="${dist.jar}"/>
            </fx:signjar>

            <!-- generate JNLPs, HTML and native bundles -->
            <fx:deploy width="960" height="720" includeDT="true" nativeBundles="all"
                       outdir="${basedir}/${dist.dir}" embedJNLP="true" outfile="${application.title}">
                <fx:application refId="swingFXApp"/>
                <fx:resources>
                    <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/>
                </fx:resources>
                <fx:permissions/>
                <info title="Sample app: ${application.title}" vendor="${application.vendor}"/>
            </fx:deploy>
        </target>

    Read the article

  • CodePlex Daily Summary for Tuesday, August 14, 2012

    CodePlex Daily Summary for Tuesday, August 14, 2012Popular ReleasesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.60: Allow for CSS3 grid-column and grid-row repeat syntax. Provide option for -analyze scope-report output to be in XML for easier programmatic processing; also allow for report to be saved to a separate output file.Diablo III Drop Statistics Service: 1.0: Client OnlyClosedXML - The easy way to OpenXML: ClosedXML 0.67.2: v0.67.2 Fix when copying conditional formats with relative formulas v0.67.1 Misc fixes to the conditional formats v0.67.0 Conditional formats now accept formulas. Major performance improvement when opening files with merged ranges. Misc fixes.Umbraco CMS: Umbraco 4.8.1: Whats newBug fixes: Fixed: When upgrading to 4.8.0, the database upgrade didn't run The changes to the <imaging> section in umbracoSettings.config caused errors when you didn't apply them during the upgrade. Defaults will now be used if any keys are missing Scheduled unpublishes now only unpublishes nodes set to published rather than newest Work item: 30937 - Fixed problem with FileHanderData in intranet environment in IE Work item: 3098 - Fix for null exception issue when using 'umbr...MySqlBackup.NET - MySQL Backup Solution for C#, VB.NET, ASP.NET: MySqlBackup.NET 1.4.4 Beta: MySqlBackup.NET 1.4.4 beta Fix bug: If the target database's default character set is not UTF8, UTF8 character will be encoded wrongly during Import. Now, database default character set will be recorded into Dump File at line of "SET NAMES". During import(restore), MySqlBackup will again detect and use the target database default character char set. MySqlBackup.NET 1.4.2 beta Fix bug: MySqlConnection is not closed when AutoCloseConnection set to true after Export or Import completed/halted. M...patterns & practices - Unity: Unity 3.0 for .NET 4.5 and WinRT - Preview: The Unity 3.0.1208.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. This is an updated version of the port after the .NET Framework 4.5 and Windows 8 have RTM'ed. Please see the Release Notes Providing feedback Post your feedback on the Unity forum Submit and vote on new features for Unity on our Uservoice site.LiteBlog (MVC): LiteBlog 1.31: Features of this release Windows8 styled UI Namespace and code refactoring Resolved the deployment issues in the previous release Added documentation Help file Help file is HTML based built using SandCastle Help file works in all browsers except IE10Self-Tracking Entity Generator for WPF and Silverlight: Self-Tracking Entity Generator v 2.0.0 for VS11: Self-Tracking Entity Generator for WPF and Silverlight v 2.0.0 for Entity Framework 5.0 and Visual Studio 2012Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.6.0: New Stuff ImageTile Control - think People Tile MicrophoneRecorder - Coding4Fun.Phone.Audio GzipWebClient - Coding4Fun.Phone.Net Serialize - Coding4Fun.Phone.Storage this is code I've written countless times. JSON.net is another alternative ChatBubbleTextBox - Added in Hint TimeSpan languages added: Pl Bug Fixes RoundToggleButton - Enable Visual State not being respected OpacityToggleButton - Enable Visual State not being respected Prompts VS Crash fix for IsPrompt=true More...AssaultCube Reloaded: 2.5.2 Unnamed: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to pack it. Try to compile it. 
If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) Added 3rd person Added mario jumps Fixed nextprimary code exploit ...NPOI: NPOI 2.0: New features a. Implement OpenXml4Net (same as System.Packaging from Microsoft). It supports both .NET 2.0 and .NET 4.0 b. Excel 2007 read/write library (NPOI.XSSF) c. Word 2007 read/write library(NPOI.XWPF) d. NPOI.SS namespace becomes the interface shared between XSSF and HSSF e. Load xlsx template and save as new xlsx file (partially supported) f. Diagonal line in cell both in xls and xlsx g. Support isRightToLeft and setRightToLeft on the common spreadsheet Sheet interface, as per existin...BugNET Issue Tracker: BugNET 1.1: This release includes bug fixes from the 1.0 release for email notifications, RSS feeds, and several other issues. Please see the change log for a full list of changes. http://support.bugnetproject.com/Projects/ReleaseNotes.aspx?pid=1&m=76 Upgrade Notes The following changes to the web.config in the profile section have occurred: Removed <add name="NotificationTypes" type="String" defaultValue="Email" customProviderData="NotificationTypes;nvarchar;255" />Added <add name="ReceiveEmailNotifi...Visual Rx: V 2.0.20622.9: help will be available at my blog http://blogs.microsoft.co.il/blogs/bnaya/archive/2012/08/12/visual-rx-toc.aspx the SDK is also available though NuGet (search for VisualRx) http://nuget.org/packages/VisualRx if you want to make sure that the Visual Rx Viewer can monitor on your machine, you can install the Visual Rx Tester and run it while the Viewer is running.????: ????2.0.5: 1、?????????????。RiP-Ripper & PG-Ripper: PG-Ripper 1.4.01: changes NEW: Added Support for Clipboard Function in Mono Version NEW: Added Support for "ImgBox.com" links FIXED: "PixHub.eu" links FIXED: "ImgChili.com" links FIXED: Kitty-Kats Forum loginPlayer Framework by Microsoft: Player Framework for Windows 8 (Preview 5): Support for Smooth Streaming SDK beta 2 Support for live playback New bitrate meter and SD/HD indicators Auto smooth streaming track restriction for snapped mode to conserve bandwidth New "Go Live" button and SeekToLive API Support for offset start times Support for Live position unique from end time Support for multiple audio streams (smooth and progressive content) Improved intellisense in JS version Support for Windows 8 RTM ADDITIONAL DOWNLOADSSmooth Streaming Client SD...Windows Uninstaller: Windows Uninstaller v1.0: Delete Windows once You click on it. Your Anti Virus may think It is Virus because it delete Windows. Prepare a installation disc of any operating system before uninstall. ---Steps--- 1. Prepare a installation disc of any operating system. 2. Backup your files if needed. (Optional) 3. Run winuninstall.bat . 4. Your Computer will shut down, When Your Computer is shutting down, it is uninstalling Windows. 5. Re-Open your computer and quickly insert the installation disc and install the new ope...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.1.2 (Win 8 RP) - Source: WinRT XAML Toolkit based on the Release Preview SDK. For compiled version use NuGet. 
From View/Other Windows/Package Manager Console enter: PM> Install-Package winrtxamltoolkit http://nuget.org/packages/winrtxamltoolkit Features Controls Converters Extensions IO helpers VisualTree helpers AsyncUI helpers New since 1.0.2 WatermarkTextBox control ImageButton control updates ImageToggleButton control WriteableBitmap extensions - darken, grayscale Fade in/out method and prope...Media Companion: Media Companion 3.506b: This release includes an update to the XBMC scrapers, for those who prefer to use this method. There were a number of behind-the-scene tweaks to make Media Companion compatible with the new TMDb-V3 API, so it was considered important to get it out to the users to try it out. Please report back any important issues you might find. For this reason, unless you use the XBMC scrapers, there probably isn't any real necessity to download this one! The only other minor change was one to allow the mc...JSON C# Class Generator: JSON CSharp Class Generator 1.3: Support for native JSON.net serializer/deserializer (POCO) New classes layout option: nested classes Better handling of secondary classesNew Projects.NET MSI Install Manager: MSI Install Manager is an API that allows you to install an MSI within a .NET application. You can create a WPF install experience and drive the execution of thArchi Simple: ArchiSimple is a project to create simple house plans.Arduino Installer For Atmel Studio 6: Deploy libraries and project templates to Atmel Studio 6 to allow the creation of Arduino sketches and libraries.BDD Editor: The editor list the text from already created file in the feature folder.Binary Times: Binary TimesCaliburn.Micro.FrameworkContentElement: A library to enable attaching Caliburn.Micro messages to FrameworkContentElement descendant objects. Target platform: WPF.CoinChoue2: .CustomEDMX: Prover *customização* da geração automática de código do EntityFramework, gerando objetos de acordo com os padrões de nomenclatura da equipe de desenvolvimento.DeForm: DeForm is a WinRT component that allows you to apply a pipeline of effects to a picture.DeTaiCS2012_Srmdep: Ð? tài CS nam 2012 v? qu?n lý kinh doanh có di?u ki?nEMSolution: This is only a porject for me, a .net beginner, to practise .net programming skills. SO any suggestion or help is welcome. On going...Facebook SDK Orchard module: Orchard module containing the Facebook C# SDK (http://csharpsdk.org/) for simple installation to an Orchard site.Federation Metadata Signing: This is a console application (to give a feeling as if we are doing something serious on black screen) built using .NET 4 (to signify that we work on latest .neGit-TF: Git-TF is a set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git.mobx.mobi: mobx.mobi - mobile CMS. Good for beginners learning MVC and mobile, or webforms programmer seeking to switch to MVC. Simple and friendly design.PdfExtractor: PdfExtractorSpeed Run: Speed Run for WP7technoschool: Aplikasi sistem informasi sekolah baik SD, SMP maupun SMA. Dimana aplikasi ini telah mencakup akademik, perpustakaan, keuangan, dan kepegawaianThe LogNut logging library and facilities: LogNut is a comprehensive logging facility written in C# for C# developers. 
It provides a simple way to write something from within your program, as it runs.Visual Studio File Cleanup: Visual Studio File Cleaner Good code examples on how to do a poor man’s obfuscation of your solutionWeatherSpark: Sprarkline-inspired graphs provide a comprehensive view of each day’s temperature, humidity, precipitation, and cloud cover.Windows Phone Interprocess Messaging API: Message passing sample WinLogCheck - Eventlogs Scanner: WinLogCheck is a command line windows eventlogs scanner. The main goal is get a report about the events in the eventlog, except for repetitive events.WinUnleaked Image Uploader: WinUnleaked Image Uploader for WinUnleaked Staff members

    Read the article

  • Windows Azure Mobile Services: New support for iOS apps, Facebook/Twitter/Google identity, Emails, SMS, Blobs, Service Bus and more

    - by ScottGu
A few weeks ago I blogged about Windows Azure Mobile Services - a new capability in Windows Azure that makes it incredibly easy to connect your client and mobile applications to a scalable cloud backend. Earlier today we delivered a number of great improvements to Windows Azure Mobile Services. New features include:

- iOS support – enabling you to connect iPhone and iPad apps to Mobile Services
- Facebook, Twitter, and Google authentication support with Mobile Services
- Blob, Table, Queue, and Service Bus support from within your Mobile Service
- Sending emails from your Mobile Service (in partnership with SendGrid)
- Sending SMS messages from your Mobile Service (in partnership with Twilio)
- Ability to deploy mobile services in the West US region

All of these improvements are now live in production and available to start using immediately. Below are more details on them.

iOS Support

This week we delivered initial support for connecting iOS-based devices (including iPhones and iPads) to Windows Azure Mobile Services. Like the rest of our Windows Azure SDK, we are delivering the native iOS libraries to enable this under an open source (Apache 2.0) license on GitHub. We're excited to get your feedback on this new library through our forum and GitHub issues list, and we welcome contributions to the SDK. To create a new iOS app or connect an existing iOS app to your Mobile Service, simply select the "iOS" tab within the Quick Start view of a Mobile Service within the Windows Azure Portal – and then follow either the "Create a new iOS app" or "Connect to an existing iOS app" link below it. Clicking either of these links will expand and display step-by-step instructions for how to build an iOS application that connects with your Mobile Service. Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple iOS "Todo List" app that stores data in Windows Azure. Then follow the tutorials below to explore how to use the iOS client libraries to store data and authenticate users:

- Get Started with data in Mobile Services for iOS
- Get Started with authentication in Mobile Services for iOS

Facebook, Twitter, and Google Authentication Support

Our initial preview of Mobile Services supported the ability to authenticate users of mobile apps using Microsoft Accounts (formerly called Windows Live ID accounts). This week we are adding the ability to also authenticate users using Facebook, Twitter, and Google credentials. These are now supported with both Windows 8 apps as well as iOS apps (and a single app can support multiple forms of identity simultaneously – so you can offer your users a choice of how to log in). The tutorials below walk through how to register your Mobile Service with an identity provider:

- How to register your app with Microsoft Account
- How to register your app with Facebook
- How to register your app with Twitter
- How to register your app with Google

The tutorials above walk through how to obtain a client ID and a secret key from the identity provider. You can then click on the "Identity" tab of your Mobile Service (within the Windows Azure Portal) and save these values to enable server-side authentication with your Mobile Service. You can then write code within your client or mobile app to authenticate your users to the Mobile Service.
For example, below is the code you would write to have them log in to the Mobile Service using their Facebook credentials:

Windows Store App (using C#):

var user = await App.MobileService
                    .LoginAsync(MobileServiceAuthenticationProvider.Facebook);

iOS app (using Objective C):

UINavigationController *controller = [self.todoService.client
    loginViewControllerWithProvider:@"facebook"
    completion:^(MSUser *user, NSError *error) {
       //...
}];

Learn more about authenticating Mobile Services using Microsoft Account, Facebook, Twitter, and Google from these tutorials:

- Get started with authentication in Mobile Services for Windows Store (C#)
- Get started with authentication in Mobile Services for Windows Store (JavaScript)
- Get started with authentication in Mobile Services for iOS

Using Windows Azure Blob, Tables and Service Bus with your Mobile Services

Mobile Services provide a simple but powerful way to add server logic using server scripts. These scripts are associated with the individual CRUD operations on your mobile service's tables. Server scripts are great for data validation, custom authorization logic (e.g. does this user participate in this game session), augmenting CRUD operations, sending push notifications, and other similar scenarios. Server scripts are written in JavaScript and are executed in a secure server-side scripting environment built using Node.js. You can edit these scripts and save them on the server directly within the Windows Azure Portal. In this week's release we have added the ability to work with other Windows Azure services from your Mobile Service server scripts. This is supported using the existing "azure" module within the Windows Azure SDK for Node.js. For example, the below code could be used in a Mobile Service script to obtain a reference to a Windows Azure Table (after which you could query it or insert data into it):

    var azure = require('azure');
    var tableService = azure.createTableService("<< account name >>",
                                                "<< access key >>");

Follow the tutorials on the Windows Azure Node.js dev center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module.

Sending emails from your Mobile Service

In this week's release we have also added the ability to easily send emails from your Mobile Service, building on our partnership with SendGrid. Whether you want to add a welcome email upon successful user registration, or make your app alert you of certain usage activities, you can do this now by sending email from Mobile Services server scripts. To get started, sign up for a SendGrid account at http://sendgrid.com. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid. To sign up for this offer, or get more information, please visit http://www.sendgrid.com/azure.html. Once you have signed up, you can add the following script to your Mobile Service server scripts to send email via the SendGrid service:

    var sendgrid = new SendGrid('<< account name >>', '<< password >>');

    sendgrid.send({
        to: '<< enter email address here >>',
        from: '<< enter from address here >>',
        subject: 'New to-do item',
        text: 'A new to-do was added: ' + item.text
    }, function (success, message) {
        if (!success) {
            console.error(message);
        }
    });

Follow the Send email from Mobile Services with SendGrid tutorial to learn more.
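To see how these server scripts connect to client code: the SendGrid script above runs when a client inserts a row into the table it is attached to. Below is a minimal Windows Store (C#) sketch of such an insert. This is an illustrative sketch, not code from the original post; TodoItem is a hypothetical model class, and the service URL and application key are placeholders you would copy from the portal's Quick Start page.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public static class TodoSample
{
    // Placeholder endpoint and application key - use your own Mobile Service values.
    private static readonly MobileServiceClient MobileService =
        new MobileServiceClient(
            "https://your-service.azure-mobile.net/",
            "your-application-key");

    public static async Task AddTodoAsync(string text)
    {
        var item = new TodoItem { Text = text };

        // Inserting the row invokes any insert script registered on the
        // TodoItem table - for example the SendGrid email script above.
        await MobileService.GetTable<TodoItem>().InsertAsync(item);
    }
}

If the email script is registered on the TodoItem table's insert operation, each successful call to AddTodoAsync would also trigger the notification email.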
Sending SMS messages from your Mobile Service

SMS is a key communication medium for mobile apps - it comes in handy if you want your app to send users a confirmation code during registration, allow your users to invite their friends to install your app, or reach out to mobile users without a smartphone. Using Mobile Service server scripts and Twilio's REST API, you can now easily send SMS messages from your app. To get started, sign up for a Twilio account. Windows Azure customers receive 1000 free text messages when using Twilio and Windows Azure together. Once signed up, you can add the following to your Mobile Service server scripts to send SMS messages:

    var httpRequest = require('request');
    var account_sid = "<< account SID >>";
    var auth_token = "<< auth token >>";

    // Create the request body
    var body = "From=" + from + "&To=" + to + "&Body=" + message;

    // Make the HTTP request to Twilio
    httpRequest.post({
        url: "https://" + account_sid + ":" + auth_token +
             "@api.twilio.com/2010-04-01/Accounts/" + account_sid + "/SMS/Messages.json",
        headers: { 'content-type': 'application/x-www-form-urlencoded' },
        body: body
    }, function (err, resp, body) {
        console.log(body);
    });

I'm excited to be speaking at the TwilioCon conference this week, and will be showcasing some of the cool scenarios you can now enable with Twilio and Windows Azure Mobile Services.

Mobile Services availability in the West US region

Our initial preview of Windows Azure Mobile Services was only supported in the US East region of Windows Azure. As with every Windows Azure service, over time we will extend Mobile Services to all Windows Azure regions. With this week's preview update we've added support so that you can now create your Mobile Service in the West US region as well.

Summary

The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. We'll have even more new features and enhancements coming later this week – including .NET 4.5 support for Windows Azure Web Sites. Keep an eye out on my blog for details as new features become available.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC, have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that it concerns whether and how the EU should recognize standards and specifications from certain fora and consortia, based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate. For the benefit of clarity, here are some of the observations they make:

a) "Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based".
b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants".
c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)".
d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards."
e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on".
f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation."
g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful."

First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI, beyond being an ESO, also has a global perspective and, moreover, does consider reality. Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and lack of perspective. I will back this up with three facts, and leave it there.

1. Since the 1980s, global industry fora and consortia, such as IETF, W3C and OASIS, have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante.
- Practically no ICT system can be built without using fora and consortia standards (FCS).
- Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate.
- FCS are of high relevance for achieving and promoting interoperability and driving innovation.

2. FCS are complementary to the formally recognized standards organizations, including the ESOs.
- No work will be taken away from the ESOs should the EU recognize certain FCS.
- Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say.
- ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS.
- ETSI, also an ESO, seems to clearly recognize these facts.

3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia.
- W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host and in 2003, ERCIM (European Research Consortium in Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board include Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is a Finnish company, Opera is a Norwegian company. SAP's Claus von Riegen is an alumnus of the same Advisory Board.
- OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, who is an Independent Consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave.
- IETF: Oversight of its activities rests with the Internet Architecture Board (IAB), since 2007 chaired by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo, whose affiliation is the University Carlos III of Madrid, Spain, as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture.
- Member States: At least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see list and NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality.
- A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU-registered companies or not.

CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking from a publicly funded organization.
I wish them all possible success with Friday's meeting but I fear all of the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they just have been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then, we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

    Read the article

  • Silverlight MEF – Download On Demand

    - by PeterTweed
Take the Slalom Challenge at www.slalomchallenge.com!

A common challenge with building complex applications in Silverlight is the initial download size of the xap file. MEF enables us to build composable, composite applications. Wouldn't it be great if we had a mechanism to split out components into different Silverlight applications in separate xap files and download a separate xap file only if needed? MEF gives us the ability to do this. This post will cover the basics needed to build such a composite application split between different Silverlight applications, downloading the referenced Silverlight application only when needed.

Steps:

1. Create a Silverlight 4 application.
2. Add references to the following assemblies:
System.ComponentModel.Composition.dll
System.ComponentModel.Composition.Initialization.dll
3. Add a new Silverlight 4 application called ExternalSilverlightApplication to the solution that was created in step 1. Ensure the new application is hosted in the web application for the solution and choose to not create a test page for the new application.
4. Delete the App.xaml and MainPage.xaml files – they aren't needed.
5. Add references to the following assemblies in the ExternalSilverlightApplication project:
System.ComponentModel.Composition.dll
System.ComponentModel.Composition.Initialization.dll
6. Ensure the two references above have their Copy Local values set to false. As we will have these two assemblies in the original Silverlight application, we will have no need to include them in the built ExternalSilverlightApplication output.
7. Add a new user control called LeftControl to the ExternalSilverlightApplication project.
8. Replace the LayoutRoot Grid with the following xaml:

    <Grid x:Name="LayoutRoot" Background="Beige" Margin="40" >
        <Button Content="Left Content" Margin="30"></Button>
    </Grid>

9. Add the following statement to the top of the LeftControl.xaml.cs file:

using System.ComponentModel.Composition;

10. Add the following attribute to the LeftControl class:

    [Export(typeof(LeftControl))]

This attribute tells MEF that the type LeftControl will be exported – i.e. made available for other applications to import and compose into the application.
11. Add a new user control called RightControl to the ExternalSilverlightApplication project.
12. Replace the LayoutRoot Grid with the following xaml:

    <Grid x:Name="LayoutRoot" Background="Green" Margin="40" >
        <TextBlock Margin="40" Foreground="White" Text="Right Control" FontSize="16" VerticalAlignment="Center" HorizontalAlignment="Center" ></TextBlock>
    </Grid>

13. Add the following statement to the top of the RightControl.xaml.cs file:

using System.ComponentModel.Composition;

14. Add the following attribute to the RightControl class:

    [Export(typeof(RightControl))]

15. In your original Silverlight project add a reference to the ExternalSilverlightApplication project.
16. Change the reference to the ExternalSilverlightApplication project to have its Copy Local value = false. This will ensure that the referenced ExternalSilverlightApplication Silverlight application is not included in the original Silverlight application package when it is built. The ExternalSilverlightApplication Silverlight application therefore has to be downloaded on demand by the original Silverlight application for its controls to be used.
17. In your original Silverlight project add the following xaml to the LayoutRoot Grid in MainPage.xaml:

        <Grid.RowDefinitions>
            <RowDefinition Height="65*" />
            <RowDefinition Height="235*" />
        </Grid.RowDefinitions>
        <Button Name="LoaderButton" Content="Download External Controls" Click="Button_Click"></Button>
        <StackPanel Grid.Row="1" Orientation="Horizontal" HorizontalAlignment="Center" >
            <Border Name="LeftContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
            <Border Name="RightContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
        </StackPanel>

The borders will hold the controls that will be downloaded, imported and composed via MEF when the button is clicked.
18. Add the following statement to the top of the MainPage.xaml.cs file:

using System.ComponentModel.Composition;

19. Add the following properties to the MainPage class:

        [Import(typeof(LeftControl))]
        public LeftControl LeftUserControl { get; set; }

        [Import(typeof(RightControl))]
        public RightControl RightUserControl { get; set; }

This defines properties accepting the LeftControl and RightControl types. The attributes are used to tell MEF the discovered type that should be applied to each property when composition occurs.
20. Add the following event handler for the button click to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            DeploymentCatalog deploymentCatalog =
                new DeploymentCatalog("ExternalSilverlightApplication.xap");

            CompositionHost.Initialize(deploymentCatalog);

            deploymentCatalog.DownloadCompleted += (s, i) =>
            {
                if (i.Error == null)
                {
                    CompositionInitializer.SatisfyImports(this);

                    LeftContent.Child = LeftUserControl;
                    RightContent.Child = RightUserControl;
                    LoaderButton.IsEnabled = false;
                }
            };

            deploymentCatalog.DownloadAsync();
        }

This is where the magic happens! The deploymentCatalog object is pointed to the ExternalSilverlightApplication.xap file. It is then associated with the CompositionHost initialization. As the download will be asynchronous, an event handler is created for the DownloadCompleted event. The deploymentCatalog object is then told to start the asynchronous download. The event handler that executes when the download is completed uses the CompositionInitializer.SatisfyImports() function to tell MEF to satisfy the Imports for the current class. It is at this point that the LeftUserControl and RightUserControl properties are initialized with composed objects from the downloaded ExternalSilverlightApplication.xap package.
21. Run the application, click the Download External Controls button, and see the controls defined in the ExternalSilverlightApplication application loaded into the original Silverlight application.

Congratulations! You have implemented download-on-demand capabilities for composite applications using the MEF DeploymentCatalog class. You are now able to segment your applications into separate xap files for deployment.
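One refinement worth considering: DeploymentCatalog can also report download progress, which is useful for xap files large enough to justify this technique in the first place. The C# sketch below is an assumption-level illustration, not code from the original post; it presumes the DeploymentCatalog.DownloadProgressChanged event described in the MEF preview documentation and a ProgressBar named DownloadProgress in your page.

// Hypothetical extension of the Button_Click handler above:
// wire up progress reporting before starting the download.
deploymentCatalog.DownloadProgressChanged += (s, args) =>
{
    // args.ProgressPercentage is an int from 0 to 100.
    DownloadProgress.Value = args.ProgressPercentage;
};

deploymentCatalog.DownloadAsync();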

    Read the article

  • CodePlex Daily Summary for Monday, May 17, 2010

    CodePlex Daily Summary for Monday, May 17, 2010New Projects.NET Essentials Course: .NET Essentials course @ Telerik Academy Training project for the studentsAU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...CybennyCMS: Very simple CMS system for building sites with ASP.NET with templates for lay-out, content pages with only html content and a xml file for the site...essionPIM: essionPIMGIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...IP Informer: IP Informer is IP Informer.Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP), Microsoft Ofis 2010 ürünleri için geliştirilmiş eklenti yazılımıdır. KOP, Word ve Excel’de bulunan işlevlerinin genişle...Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...Open XML Validator: This WPF app give you a brief resume about errors in your Open XML documents.Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...PiPiBugNet: PiPiBugNet是一套全新的开源Bug管理系统Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying gamesSharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgrade in few...Spodi: Spodi is created on 22-04-2010TfsPolicyPack: This project will provide a few checkin policies for VS 2010.vccodesandobx: vccodesandobxvccodesandobxvccodesandobxWhiteNile: test project using codeplexNew ReleasesAnimeStore.Net: 1.0.3.0: Build 1.0.3.0 Changes Move some functionality to features (MEF) Filter / Search functionality. Anime hard-copy records storage (e.g Disk Storage ...AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download, see the Twitter Map web part page for all the information.Blueset Studio Opensource Projects: 推来: 稳定版本BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost)CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: url http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. ...CBM-Command: 2010-05-16: Release Notes - 2010-05-16New Features New navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition"CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. 
From version 2.0 onwards, tables for par scores can also be produced.EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.Floe IRC Client: Floe IRC Client 2010-05 R5: - Many new context menu options for @s - Ability to select multiple users in the nick list for some operations (kick, ban) - Bunch of minor bug fix...Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix bug where deleted posts were still showing in the calendar.Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note, Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files are added. Basic settings (Show Prompt, and Auto Hide) are...Multi-Language Words Memorizer: Memorizer 1.1: Issues fix, XML db update with new words.NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New : - a Visual Studio 2010 port can be installed through the new extension manager : you just have to download NShaderV...PHPExcel: PHPExcel 1.7.3 Production: Want to contribute?Please refer the Contribute page. DonationsDonate via PayPal. If you want to, we can also add your name / company on our Donati...Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64 bit Windows 7 using VS 2010 Express edition.Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.Shake - C# Make: Shake v0.1.20: New: Simple console logger Changes: Command line params helper writes out syntax and samples (like msbuild) Fixes: Assembly info, file task and r...SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in few days.Team Deploy: Team Deploy 2010 Beta 1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...The Ping Master: v0.9.0.0: Installer for The Ping Master binariesUseful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel Workbook below works independently of the others, so you only need to download...VCC: Latest build, v2.1.30516.0: Automatic drop of latest buildVisual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in visual c 2008.YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0 The following changes have been made: Merged classes to automatically sense if the target file is Javascript or CSS. 
Cleaned up setu...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryPHPExcelBlogEngine.NETRawrMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersDotNetZip LibraryCaliburn: An Application Framework for WPF and SilverlightSQL Server PowerShell Extensions

    Read the article

  • CodePlex Daily Summary for Sunday, May 23, 2010

    CodePlex Daily Summary for Sunday, May 23, 2010New ProjectsA2Command: Apple 2 port of CBM-Command (http://cbmcommand.codeplex.com)AgUnit: AgUnit is a plugin for Jetbrains ReSharper (R#) that allows you to run and debug Silverlight unit tests from within Visual Studio.BSonPosh Powershell Module: A collection of useful Powershell functions I have written and collected over the years. It is a Powershell v2 Module composed of mostly scripts.DB Restriker: Simple tool for lookup, parsing, searching some standard databases using wildcards and pattern recognition.Entity Framework Repository & Unit of Work Template: T4 Template for Entity Framework 4 for creating a data access layer using the repository and unit of work patterns. Designed to work well with dep...Fiction Catalog: A catalog project designed to store information about fictional literature.Giving a Presentation: Useful for people doing presentations, this application hides desktop icons, disables screensaver, closes chosen programs when presentation starts,...glueless: Glueless is a local message bus which allows architect to design highly decoupled systems and applications. Glueless is a step beyond dependency i...HtmlCodeIt: Take any code and format it so that it can be viewed properly on a web browser, blog post or website.just testproject :): just have a test!KanbanTaskboard: The aim of the project is to design and implement a functional prototype for visualizing and operating a multi-platform virtual "Kanban Taskboard”Life System: Life SystemOaSys Project: Project Oasys is a project that aims to help solve desertification. Scoring of pingPong Game: Scoring of pingPong GameSilverlight Web Comic: The Silverlight Web Comic makes easier for the people create your own comic with your own pictures o drawings, and add the globes of text like the ...TickSharp: C# Wrapper for http://TickSpot.com RESTful API.Traductor: El Traductor es una aplicación de escritorio para traducción de frases entre distintos idiomas basada en la plataforma Silverlight Out Of Browser y...WatchersNET.SkinObjects.ModulActionsMenu: Displays the Module Actions Menu as a Unsorted CSS Menu.xxfd1r4w96: testingNew ReleasesAgUnit: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1.zip into the "Bin\Plugins\" folder of your ReSharper installation (default C:...ASP.NET MVC | SCAFFOLD: ASP.NET MVC SCAFFOLD - Beta 1.0: Release versão betaBizTalk Server 2006 Documenter: Documenter_v3.4.0.0: This is the new release of the documenter which has the following highlights Support for 64 bit systems Support for SxS scenarios (so now the sys...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 Beta 2- VS 2008 Replacement: The CassiniDev Visual Studio build is a fully compatibly Visual Studio 2008/2010 Development server drop-in replacement with all CassiniDev enhance...CBM-Command: 2010-05-22 Beta: Release Notes - 2010-05-22 BetaNew Features Simple text file viewer. 
Now when you use SHIFT-RETURN to open a file, it will ask if you want to view...Easy Validation: Documentation: Documentation for easyVal was created and presented at University of Texas at Austin in May of 2010.Entity Framework Repository & Unit of Work Template: 1.0: Initial ReleaseFrotz.NET: FrotzNet 1.0 beta: Many, many changes, including: - Got Adaptive Palette working for graphics - Got undo working - Implemented all zcodes - Added scripting as well as...Giving a Presentation: CTP: This release includes basic extensibility infrastructure and three extensions: hides desktop icons, disables screensaver, closes chosen programs wh...Gov 2.0 Kit: SharePoint 2010 MyPeeps Mysite Accelerators: SharePoint 2010 MyPeeps Mysite Accelerators. Attached are the installation and documentations files.HKGolden Express: HKGoldenExpress (Build 201005221900): New features: (None) Bug fix: Hong Kong special characters now can be posted without encoding problem. Improvements: (None) Other changes: (None) K...Intellibox - A WPF auto complete textbox search control: Beta 2: Updated the namespace of the Intellibox control from "System.Windows.Controls" to "FeserWard.Controls". Empty binding Path properties now work on...MDownloader: MDownloader-0.15.14.59111: Fixed DepositFile provider. Fixed FileFactory provider. Added simple fakeness detector (can check if .rar, .zip, .7z files have valid signature...Mute4: V1: Initial version of Mute4NLog - Advanced .NET Logging: Nightly Build 2010.05.22.003: Changes since the last build:No changes. Unit test results:Passed 191/191 (100%) Passed 191/191 (100%) Passed 214/214 (100%) Passed 216/216 (100%)...NSIS Autorun: NSIS Autorun 0.1.9: This release includes source code, executable binaries and example materials.Silverlight Gantt Chart: Silverlight Gantt Chart 1.3 (SL4): The latest release mainly makes the Gantt Chart useful in Silverlight 4 applications.SqlServerExtensions: V 0.2 beta: V 0.2 Beta release: New features available TrimStart - trim leading characters TrimEnd - trim trailing characters Remove - remove characters f...Traductor: Version 3.1: Nuevo en esta versión: El Traductor ahora permite escoger entre los motores de Microsoft y Google. El Text to Speech is es ahora habilitado por...VCC: Latest build, v2.1.30522.0: Automatic drop of latest buildVDialer Add-In for Outlook 2007 & 2010 - Dial your Vonage phone from Outlook: VDialer Add-In 1.0.3: This release adds new features related to Journal and use of Vonage API Changes in version 1.0.3 Added configurable option to automatically open J...WatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.00: First Release For Informations How To Install, the Skin Object Read the DocumentationMost Popular ProjectsCodeComment.NETRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrpatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlightpatterns & practices: Windows Azure Security GuidanceCassiniDev - Cassini 3.5/4.0 Developers EditionGMap.NET - Great Maps for Windows Forms & PresentationNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSQL Server PowerShell ExtensionsBlogEngine.NETCodeReview

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

Connecting to Your Data

When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data: you have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.

Data Sources and Data Sets

A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connect to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a data set.

Creating a local Data Source

You can embed a connection from your report directly to your JProCo database which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the data source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It's possible to make a project-level data source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source.

Creating a Shared Data Source

The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form, as shown. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch where you can fill in a name for the data source. By default, it is named DataSource1.
The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look as shown. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer.

What's next

I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
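For readers curious about what the dialog actually produces: a shared data source in an SSRS project is saved as a small .rds file. The sketch below shows roughly what the JProCo data source created above might contain; this is an illustration based on the general shape of the format, not an excerpt from the book, and the exact elements can vary by SSRS version.

<?xml version="1.0" encoding="utf-8"?>
<RptDataSource Name="JProCo">
  <ConnectionProperties>
    <!-- "SQL" is the SQL Server data source type chosen in the dialog -->
    <Extension>SQL</Extension>
    <!-- Server and database selected in the Connection Properties dialog -->
    <ConnectString>Data Source=Localhost;Initial Catalog=JProCo</ConnectString>
    <!-- Corresponds to accepting Windows Authentication -->
    <IntegratedSecurity>true</IntegratedSecurity>
  </ConnectionProperties>
</RptDataSource>

Editing the ConnectString here (for example, changing Localhost to Tampa) is exactly the single-point-of-change benefit described above.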

    Read the article

  • Service Broker, not ETL

    - by jamiet
I have been very quiet on this blog of late and one reason for that is I have been very busy on a client project that I would like to talk about a little here. The client that I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically, I basically had to deliver a data warehouse; however, it was the real-time aspect of the project that really intrigued me. This real-time requirement meant that using an Extract, Transform, Load (ETL) tool was out of the question and so I had no choice but to write T-SQL code (i.e. stored procedures) to process the incoming messages and load the data into the data warehouse. This concerned me though – I had no way to control the rate at which data would arrive into the system, yet we were going to have end-users querying the system at the same time that those messages were arriving; the potential for contention in such a scenario was pretty high and was something I wanted to minimise as much as possible. Moreover, I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website. As you have probably guessed from the title of this blog post, this is where Service Broker stepped in! For those that have not heard of it, Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features, but the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman's terms, means the ability to process some data without requiring the system that supplied the data to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query). Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse. At the time of writing the system is not yet up to full capacity but so far everything seems to be working OK (touch wood) and, crucially, our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most, and to someone like me who is used to building systems that have overnight latencies that is a huge step forward! So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system than the traditional "load data in overnight" approach that we are all used to. Moreover, I have really enjoyed getting to grips with a new technology and even if you don't want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future.
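To make the OLTP side of this concrete: from the website's perspective, enqueuing a message is just an ordinary stored procedure call. Below is a minimal ADO.NET (C#) sketch of what that call might look like; the procedure name (dbo.EnqueueMessage), parameter, and connection string are hypothetical illustrations, not details from the client project.

using System.Data;
using System.Data.SqlClient;

public static class MessageSender
{
    // Hypothetical connection string - point this at your own database.
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=Warehouse;Integrated Security=True";

    public static void Enqueue(string messageBody)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("dbo.EnqueueMessage", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            // -1 gives an nvarchar(max) parameter for arbitrarily large messages.
            command.Parameters.Add("@messageBody", SqlDbType.NVarChar, -1).Value = messageBody;

            connection.Open();

            // The procedure just SENDs the message onto a Service Broker queue
            // and returns immediately - the warehouse processing happens
            // asynchronously via activation, so the caller never waits for it.
            command.ExecuteNonQuery();
        }
    }
}

Because the procedure only performs a SEND, the website's call returns as soon as the message is queued; the activation procedure drains the queue on its own schedule.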
    This has been a very high level overview of my use of Service Broker and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless, I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received!

    Lastly, if you have never used Service Broker before and want to kick the tyres, I have provided below a very simple "Service Broker Hello World" script that will create all of the objects required to facilitate Service Broker communications and then send the message "Hello World" from one place to another! This doesn't represent a "proper" implementation per se because it doesn't close down conversation objects (which you should always do in a real-world scenario), but it's enough to demonstrate the capabilities!
    @Jamiet
    -----------------------------------------------------------------------------------------------
    /*This is a basic Service Broker Hello World app. Have fun!
    -Jamie */
    USE MASTER
    GO
    CREATE DATABASE SBTest
    GO
    --Turn Service Broker on!
    ALTER DATABASE SBTest SET ENABLE_BROKER
    GO
    USE SBTest
    GO
    -- 1) we need to create a message type. Note that our message type is
    -- very simple and allows any type of content
    CREATE MESSAGE TYPE HelloMessage
    VALIDATION = NONE
    GO
    -- 2) Once the message type has been created, we need to create a contract
    -- that specifies who can send what types of messages
    CREATE CONTRACT HelloContract
    (HelloMessage SENT BY INITIATOR)
    GO
    --We can query the metadata of the objects we just created
    SELECT * FROM sys.service_message_types WHERE name = 'HelloMessage';
    SELECT * FROM sys.service_contracts WHERE name = 'HelloContract';
    SELECT * FROM sys.service_contract_message_usages
    WHERE service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract')
    AND message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage');
    -- 3) The communication is between two endpoints. Thus, we need two queues to
    -- hold messages
    CREATE QUEUE SenderQueue
    CREATE QUEUE ReceiverQueue
    GO
    --more querying metadata
    SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue');
    --we can also select from the queues as if they were tables
    SELECT * FROM SenderQueue
    SELECT * FROM ReceiverQueue
    -- 4) Create the required services and bind them to the above created queues
    CREATE SERVICE Sender ON QUEUE SenderQueue
    CREATE SERVICE Receiver ON QUEUE ReceiverQueue (HelloContract)
    GO
    --more querying metadata
    SELECT * FROM sys.services WHERE name IN ('Receiver','Sender');
    -- 5) At this point, we can begin the conversation between the two services by
    -- sending messages
    DECLARE @conversationHandle UNIQUEIDENTIFIER
    DECLARE @message NVARCHAR(100)
    BEGIN
      BEGIN TRANSACTION;
      BEGIN DIALOG @conversationHandle
            FROM SERVICE Sender
            TO SERVICE 'Receiver'
            ON CONTRACT HelloContract
            WITH ENCRYPTION=OFF
      -- Send a message on the conversation
      SET @message = N'Hello, World';
      SEND ON CONVERSATION @conversationHandle
            MESSAGE TYPE HelloMessage (@message)
      COMMIT TRANSACTION
    END
    GO
    --check contents of queues
    SELECT * FROM SenderQueue
    SELECT * FROM ReceiverQueue
    GO
    -- Receive a message from the queue
    RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE
    FROM ReceiverQueue
    GO
    --If no messages were received and/or you can't see anything on the queues you may wish to check the following for clues:
    SELECT * FROM sys.transmission_queue
    -- Cleanup
    DROP SERVICE Sender
    DROP SERVICE Receiver
    DROP QUEUE SenderQueue
    DROP QUEUE ReceiverQueue
    DROP CONTRACT HelloContract
    DROP MESSAGE TYPE HelloMessage
    GO
    USE MASTER
    GO
    DROP DATABASE SBTest
    GO
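    For illustration, here is a minimal sketch of what the OLTP side of such a design can look like when the caller is a Java application using JDBC. The procedure name, parameter and connection handling are hypothetical - invented for this example, not taken from the project described above:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class MessageEnqueuer {
        // The OLTP code path stays short: hand the message to a stored procedure
        // whose only job is to SEND it onto a Service Broker queue. The call
        // returns as soon as the message is queued; processing into the
        // warehouse happens asynchronously via activation.
        public static void enqueue(Connection conn, String messageBody) throws SQLException {
            try (CallableStatement cs = conn.prepareCall("{call dbo.EnqueueIncomingMessage(?)}")) {
                cs.setString(1, messageBody);
                cs.execute();
            }
        }
    }

    The point of the sketch is the shape of the interaction: the caller never blocks on warehouse work, which is exactly the decoupling described above.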

    Read the article

  • Package management fails in update-manager with gzip problems and compilation errors. U12.04LTS

    - by HarveyP
    Similar to, but not the same as, an earlier question of mine about the package management system (Package management system corrupted. Cannot install or remove packages. U12.04LTS). I followed all of L. D. James's suggestions in his answer, to no avail. This time, as well as the gzip error, I am also getting compilation errors; the difference may be due to a lack of compilation in my earlier problem, so it may be the same underlying error. The packages concerned are enumerated in the output from update-manager below. Also included below that is the output from apt-get -f install (apt-get autoremove gives the same output). I tried the update without the SSL updates - 9 to install - and got "Unhandled Error in aptdaemon" (output 3 below). Trying one package at a time - output 4 - is for firefox, first in the list of packages; it falls over at libssl1.0.0 despite deselecting it from the update... I tried apt-get install --reinstall dpkg, which succeeded, then apt-get install --reinstall tar and apt-get install --reinstall gzip, both of which failed at libssl1.0.0 as ever (as suggested by Subv3rsion elsewhere in this forum). Now I cannot even run apt-get update with complete success, even after changing server and running apt-get clean - output 5 below...

    1) Output from update-manager

    The following packages will be upgraded:
    firefox firefox-globalmenu firefox-locale-en libavcodec-extra-53 libavformat53 libavutil-extra-51 libjson0 libpostproc52 libssl1.0.0 libswscale2 openssl
    11 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
    Need to get 0 B/46.5 MB of archives.
    After this operation, 1,416 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    debconf: Perl may be unconfigured (Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67.
    BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366.
    Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
    Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11.
    Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
    Compilation failed in require at (eval 1) line 3.
    BEGIN failed--compilation aborted at (eval 1) line 3.
    ) -- aborting
    (Reading database ... 160575 files and directories currently installed.)
    Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.14 (using .../libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb) ...
    Unpacking replacement libssl1.0.0 ...
    dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error'
    dpkg-deb: error: subprocess <decompress> returned error exit status 2
    dpkg: error processing /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb (--unpack):
    subprocess dpkg-deb --fsys-tarfile returned error exit status 2
    No apport report written because MaxReports has already been reached
    Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67.
    BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366.
    Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
    Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11.
    Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
    Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8.
    Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8.
    Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7.
    Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10.
    Compilation failed in require at /usr/share/perl5/Debconf/Db.pm line 7.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Db.pm line 7.
    Compilation failed in require at /usr/share/debconf/frontend line 6.
    BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 6.
    dpkg: error whale cleanang up: subprgcess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
    /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    2) Output from apt-get -f install

    harveyp@harveyp:~$ sudo apt-get -f install
    [sudo] password for harveyp:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 to upgrade, 0 to newly install, 0 to remove and 11 not to upgrade.
    1 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    E: Internal Error, No file name for libssl1.0.0

    3) Unhandled error from aptdaemon

    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1045, in _simulate
        trans.unauthenticated = self.__simulate(trans)
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1160, in __simulate
        unauthenticated = self._get_unauthenticated()
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 347, in _get_unauthenticated
        for pkg in self._iterate_packages():
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1356, in _iterate_packages
        for enum, pkg in enumerate(self._cache):
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 216, in __iter__
        yield self[pkgname]
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 201, in __getitem__
        pkg = self._weakref[key] = Package(self, self._cache[key])
    KeyError: 'librqrcode-rubq-doc

    4) Output from update of firefox

    installArchives() failed: Error in function:
    Setting up libssl1.0.0 (1.0.1-4ubuntu5.14) ...
    Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67.
    BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366.
    Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9.
    Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11.
    Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9.
    Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8.
    Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8.
    Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7.
    BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7.
    Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10.

    5) Output from apt-get update

    ...snip...
    Hit http://ubuntu-archive.mirrors.free.org precise-security/multiverse Translation-en
    Hit http://ubuntu-archive.mirrors.free.org precise-security/restricted Translation-en
    Hit http://ubuntu-archive.mirrors.free.org precise-security/universe Translation-en
    Fetched 368 kB in 6s (59.5 kB/s)
    W: Failed to fetch gzip:/var/lib/apt/lists/partial/ubuntu-archive.mirrors.free.org_ubuntu_dists_precise_universe_source_Sources Hash Sum mismatch
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • CodePlex Daily Summary for Saturday, September 01, 2012

    CodePlex Daily Summary for Saturday, September 01, 2012

    Popular Releases

    DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...
    EntLib.com????????: EntLib.com???????? v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。
    Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.
    NicAudio: NicAudio 2.0.6: ac3,dts Solved some initialization issues with no-linear decode.
    ExpressProfiler: Initial release of ExpressProfiler v1.2: This is initial release of ExpressProfiler
    Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...
    Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...
    Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...
    Home Access Plus+: v8.0: v8.0.0901.1830 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install ...
    Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: Extended ReflectionClass libxml error handling, constants DateTime::modify(), DateTime::getOffset() TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode DateTime methods (WordPress posting fix) Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging ...
    NougakuDoCompanion: v1.1.0: Add temp folder of local resource, Resize local resource. Change launch ruby commnadline args from rack to bundle. 1.NougakuDoCompanion v1.1.0 cspkg.zip - cspkg and ServiceConfiguration.xml (small, medium, large, extra large vm) - include NougakudoSetupTool.exe and readme.txt 2.NougakuDoCompanion v1.1.0.zip - Source code. include NougakudoSetupTool.exe - include activerecord-sqlserver-adapter patch in paches folder. 3.Depends tools. - Windows Azure SDK for .NET June 2012(1.7SP1) - Windows ...
    WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...
    MabiCommerce: MabiCommerce 1.0.1: What's New Setup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.
    ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....
    Facebook Web Parts for SharePoint 2010: Version 1.0.1 - WSP: SharePoint 2010 solution (WSP) Resolved a bug from Version 1.0 - WSP where user profile names would not properly update.
    CUDAfy.NET: CUDAfy V1.10 BETA: This beta version of CUDAfy V1.10 requires CUDA 5.0 RC when using the Maths libraries. Add: Support for CUDA 5 RC (required if using Maths libraries). Fix: Lock method when multi-threading enabled could dead-lock. Add: Architecture sm_35. Add: Support for context switching. Fix: Translation of PI and E must be done using InvariantCulture. Add: tcc driver property (HighPerformanceDriver). Add: GetDevice always sets the current context to the device context that was got. Add: D...
    Contactor: GSMContactorProgram V1.0 - Source Code: This is the source code for the program, For Visual Studio 2012 RC
    TouchInjector: TouchInjector 1.1: Version 1.1: fixed a bug with the autorun option
    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...
    BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????

    New Projects

    berry: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Bore Holes: A C program to read data of 11 bore holes which were drilled around a site in Sarawak, Malaysia. The data is displayed graphically as well as on a console.
    codeSHOW: This is a Windows 8 HTML/JS project with the express goal of showing simple how-to concepts for developing in Windows 8 using JavaScript (codefoster.com)
    crt-upgrade: ??????C???????????????。????????,????,?????、??、??、?、???、??????。 ???????C????????????,?????C????????????。?????????????????。
    DnevnikEnvironment?hange: bla bla bla
    DotNetNuke Translator: This is a Windows Application that can be used locally to translate resources (resx files) for a DotNetNuke installation.
    Get Connected with twitter & Pull data from user's twitter account: This source code will be useful for ASP.Net MVC C# developers. This contains Get Connected with Twitter & Pull user's information with his permission.
    GuideCP: ?????????? ??????? ??????????? ?? ??????? ?????? ??????, ?????? ??????, ????????? ? ???? ?? ???????????.
    Heart of Iron Smart Editor: This is A Heart Of Iron 3 SaveGame editor
    JSMerge: JSCombine is a command-line utility that is designed to help authors of Javascript libraries combine numerous *.js files into one, comprehensive file.
    Open Source Compiler, Optimizer and VM for a C-Like Language: Here, you can download an open-source compiler, optimizer and multi-core code generator for a C-like language and modify it in order to meet your requirements.
    OpenShip .NET - multi-carrier shipping system for Fedex, UPS and USPS: This is a proposed project for a multi-carrier shipping system to create shipments, get rates and track packages for Fedex, UPS and USPS.
    PogoPlug.NET: Low- and high-level class libraries encapsulating the PogoPlug API.
    Qi: Qi breathes life into your .NET projects by providing a collection of common helper methods and extensions so that you can get on with building your application
    Salesforce SSIS Transfer: This a project that leverages Salesforce.com's API (both Bulk and standard) to incrementally download a copy of your org's SF.com database.
    ServiceMon - Extensible Service Monitoring Utility: Standalone service monitoring tool which uses an extensible, scriptable plugin model to define monitoring actions with built-in support for HTTP GET
    Seven Up Seven Down: Seven Up Seven Down Game by Aditya Gupta Readme - How to play 1. Choose a bet amount 2. Select either 7up or 7down or 7 3. For 7up and 7down if you win yo
    SharePoint Import Data Timer: Custom timer job for Sharepoint 2010 which imports the results from SQL queries into Sharepoint lists.
    Smart Rabbit: M-Rabbit is Mihmojsos platform! Whit Smart Rabbit you can boot your Mihmojsos OS without restarting your computer!
    Sofire Suite: Sofire Suite ?????? 2009 ? 08 ??????????。????????????,???? V ??? Sofire2011(???????????????),???? Sofire.v1.5 ???。
    To be decided: summary test
    zwparking: zwparking

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
    The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable.

    First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <TextView
            android:layout_height="wrap_content"
            android:text="Custom ListView Contents"
            android:gravity="center_vertical|center_horizontal"
            android:layout_width="fill_parent" />
        <ListView
            android:id="@+id/ListView01"
            android:layout_height="wrap_content"
            android:layout_width="fill_parent"/>
    </LinearLayout>

    Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <TextView android:id="@+id/name"
            android:textSize="14sp"
            android:textStyle="bold"
            android:textColor="#FFFF00"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
        <TextView android:id="@+id/cityState"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
        <TextView android:id="@+id/phone"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
    </LinearLayout>

    Now, add an object called SearchResults. Paste this code in:

    public class SearchResults {
        private String name = "";
        private String cityState = "";
        private String phone = "";
        public void setName(String name) { this.name = name; }
        public String getName() { return name; }
        public void setCityState(String cityState) { this.cityState = cityState; }
        public String getCityState() { return cityState; }
        public void setPhone(String phone) { this.phone = phone; }
        public String getPhone() { return phone; }
    }

    This is the class that we'll be filling with our data, and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends the BaseAdapter, but you could extend the ArrayAdapter if you prefer.
    public class MyCustomBaseAdapter extends BaseAdapter {
        private static ArrayList<SearchResults> searchArrayList;
        private LayoutInflater mInflater;

        public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {
            searchArrayList = results;
            mInflater = LayoutInflater.from(context);
        }

        public int getCount() {
            return searchArrayList.size();
        }

        public Object getItem(int position) {
            return searchArrayList.get(position);
        }

        public long getItemId(int position) {
            return position;
        }

        public View getView(int position, View convertView, ViewGroup parent) {
            ViewHolder holder;
            if (convertView == null) {
                convertView = mInflater.inflate(R.layout.custom_row_view, null);
                holder = new ViewHolder();
                holder.txtName = (TextView) convertView.findViewById(R.id.name);
                holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);
                holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);
                convertView.setTag(holder);
            } else {
                holder = (ViewHolder) convertView.getTag();
            }
            holder.txtName.setText(searchArrayList.get(position).getName());
            holder.txtCityState.setText(searchArrayList.get(position).getCityState());
            holder.txtPhone.setText(searchArrayList.get(position).getPhone());
            return convertView;
        }

        static class ViewHolder {
            TextView txtName;
            TextView txtCityState;
            TextView txtPhone;
        }
    }

    (This is basically the same as the List14.java API demo.) Finally, we'll wire it all up in the main class file:

    public class CustomListView extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            ArrayList<SearchResults> searchResults = GetSearchResults();
            final ListView lv1 = (ListView) findViewById(R.id.ListView01);
            lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));
            lv1.setOnItemClickListener(new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> a, View v, int position, long id) {
                    Object o = lv1.getItemAtPosition(position);
                    SearchResults fullObject = (SearchResults) o;
                    Toast.makeText(CustomListView.this, "You have chosen: " + " " + fullObject.getName(), Toast.LENGTH_LONG).show();
                }
            });
        }

        private ArrayList<SearchResults> GetSearchResults() {
            ArrayList<SearchResults> results = new ArrayList<SearchResults>();
            SearchResults sr1 = new SearchResults();
            sr1.setName("John Smith");
            sr1.setCityState("Dallas, TX");
            sr1.setPhone("214-555-1234");
            results.add(sr1);
            sr1 = new SearchResults();
            sr1.setName("Jane Doe");
            sr1.setCityState("Atlanta, GA");
            sr1.setPhone("469-555-2587");
            results.add(sr1);
            sr1 = new SearchResults();
            sr1.setName("Steve Young");
            sr1.setCityState("Miami, FL");
            sr1.setPhone("305-555-7895");
            results.add(sr1);
            sr1 = new SearchResults();
            sr1.setName("Fred Jones");
            sr1.setCityState("Las Vegas, NV");
            sr1.setPhone("612-555-8214");
            results.add(sr1);
            return results;
        }
    }

    Notice that we first get an ArrayList of SearchResults objects (normally this would come from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do.
    Fire it up in the emulator, and you should wind up with a list showing three formatted lines for each result, with each row responding to clicks.
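    One refinement worth considering once this is working: if the result set can change after the list is first displayed, give the adapter a way to swap in new data and repaint. A minimal sketch (it assumes searchArrayList has been made a non-static instance field; the method name is my own, not part of the tutorial):

    // Inside MyCustomBaseAdapter: replace the backing list and refresh the ListView.
    public void updateResults(ArrayList<SearchResults> results) {
        searchArrayList = results;
        notifyDataSetChanged(); // BaseAdapter's built-in signal that the data changed
    }

    No manual invalidation of the ListView is needed; notifyDataSetChanged() makes it re-query the adapter and redraw the visible rows.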

    Read the article

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains.

    Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1

    Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following output shows what happens if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11:

    root@t2000# psrinfo -vp
    The physical processor has 4 virtual processors (0-3)
    UltraSPARC-T1 (chipid 0, clock 1200 MHz)
    # prtconf|grep T
    SUNW,Sun-Fire-T200
    # ldm -V
    Failed to connect to logical domain manager: Connection refused
    # pkg info ldomsmanager
    Name: system/ldoms/ldomsmanager
    Summary: Logical Domains Manager
    Description: LDoms Manager - Virtualization for SPARC T-Series
    Category: System/Virtualization
    State: Installed
    Publisher: solaris
    Version: 2.2.0.0
    Build Release: 5.11
    Branch: 0.175.0.8.0.3.0
    Packaging Date: May 25, 2012 10:20:48 PM
    Size: 2.86 MB
    FMRI: pkg://solaris/system/ldoms/ldomsmanager@2.2.0.0,5.11-0.175.0.8.0.3.0:20120525T222048Z

    The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain.

    Preparing to change - create a new boot environment

    Before doing anything else, let's create a new boot environment:

    # beadm list
    BE      Active Mountpoint Space Policy Created
    --      ------ ---------- ----- ------ -------
    solaris NR     /          2.14G static 2012-09-25 10:32
    # beadm create solaris-1
    # beadm activate solaris-1
    # beadm list
    BE        Active Mountpoint Space Policy Created
    --        ------ ---------- ----- ------ -------
    solaris   N      /          4.82M static 2012-09-25 10:32
    solaris-1 R      -          2.14G static 2012-09-29 11:40
    # init 0

    Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration.

    Preparing to change - reset to factory default

    There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command isn't working yet, it can't be done from the control domain, so I did it by logging on to the service processor:

    $ ssh -X admin@t2000-sc
    Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
    Oracle Advanced Lights Out Manager CMT v1.7.9
    Please login: admin
    Please Enter password: ********
    sc> showhost
    Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35
    Host flash versions:
    OBP 4.30.4.b 2010/07/09 13:48
    Hypervisor 1.7.3.c 2010/07/09 15:14
    POST 4.30.4.b 2010/07/09 14:24
    sc> bootmode config="factory-default"
    sc> poweroff
    Are you sure you want to power off the system [y/n]? y
    SC Alert: SC Request to Power Off Host.
    SC Alert: Host system has shut down.
    sc> poweron
    SC Alert: Host System has Reset

    At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005):

    # psrinfo -vp
    The physical processor has 8 cores and 32 virtual processors (0-31)
    The core has 4 virtual processors (0-3)
    The core has 4 virtual processors (4-7)
    The core has 4 virtual processors (8-11)
    The core has 4 virtual processors (12-15)
    The core has 4 virtual processors (16-19)
    The core has 4 virtual processors (20-23)
    The core has 4 virtual processors (24-27)
    The core has 4 virtual processors (28-31)
    UltraSPARC-T1 (chipid 0, clock 1200 MHz)
    # prtconf|grep Mem
    Memory size: 32640 Megabytes

    Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core.

    Remove ldomsmanager 2.2 and install the 1.2 version

    The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11:

    # pkg uninstall ldomsmanager
    Packages to remove: 1
    Create boot environment: No
    Create backup boot environment: No
    Services to change: 2
    PHASE ACTIONS
    Removal Phase 130/130
    PHASE ITEMS
    Package State Update Phase 1/1
    Package Cache Update Phase 1/1
    Image State Update Phase 2/2

    Finally, LDoms 1.2 is installed via its install script, the same way it was done years ago:

    # unzip LDoms-1_2-Integration-10.zip
    # cd LDoms-1_2-Integration-10/Install/
    # ./install-ldm
    Welcome to the LDoms installer.
    You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit.
    ... normal install messages omitted ...

    The Solaris Security Toolkit applies to Solaris 10, and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below:

    You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit.
    Select a security profile from this list:
    a) Hardened Solaris configuration for LDoms (recommended)
    b) Standard Solaris configuration
    c) Your custom-defined Solaris security configuration profile
    Enter a, b, or c [a]: b
    ... other install messages omitted for brevity ...

    After install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager:

    # svcs ldmd
    STATE STIME FMRI
    online 22:00:36 svc:/ldoms/ldmd:default
    # svcs vntsd
    STATE STIME FMRI
    disabled Aug_19 svc:/ldoms/vntsd:default
    # ldm -V
    Logical Domain Manager (v 1.2-debug)
    Hypervisor control protocol v 1.3
    Using Hypervisor MD v 1.1
    System PROM:
    Hypervisor v. 1.7.3. @(#)Hypervisor 1.7.3.c 2010/07/09 15:14
    OpenBoot v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48

    Set up control domain and domain services

    At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 ran in 'delayed configuration' mode (as expected) during initial configuration, before rebooting the control domain. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10.

    # ldm list
    ------------------------------------------------------------------------------
    Notice: the LDom Manager is running in configuration mode. Configuration and
    resource information is displayed for the configuration under construction;
    not the current active configuration. The configuration being constructed will
    only take effect after it is downloaded to the system controller and the host
    is reset.
    ------------------------------------------------------------------------------
    NAME    STATE  FLAGS  CONS VCPU MEMORY UTIL UPTIME
    primary active -n-c-- SP   32   32640M 3.2% 4d 2h 50m
    # ldm add-vdiskserver primary-vds0 primary
    # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
    # ldm add-vswitch net-dev=net0 primary-vsw0 primary
    # ldm set-mau 2 primary
    # ldm set-vcpu 8 primary
    # ldm set-memory 4g primary
    # ldm add-config initial
    # ldm list-spconfig
    factory-default
    initial [current]

    That's it, really. After reboot, we are ready to install guest domains.

    Summary - new wine in old bottles

    This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for SPARC 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.

    Read the article

  • class hierarchy design for small java project

    - by user523956
    I have written Java code which does the following. The main goal is to fetch emails from (inbox, spam) folders and store them in a database. It fetches emails from gmail, gmx, web.de, yahoo and Hotmail. The following attributes are stored in the MySQL database: Slno, messagedigest, messageid, foldername, dateandtime, receiver, sender, subject, cc, size and emlfile. For gmail, gmx and web.de, I have used the JavaMail API, because email from them can be fetched with IMAP. For yahoo and hotmail, I have used an HTML parser and HttpClient to fetch emails from the spam folder, and for the inbox folder I have used the POP3 JavaMail API.

    I want to have a proper class hierarchy which makes my code efficient and easily reusable. As of now I have designed the class hierarchy as below. I am sure it can still be improved, so I would like to have different opinions on it. I have the following classes and methods as of now:

    MainController - Here I pass the email id, password and folder names from which emails have to be fetched.

    Abstract class EmailProtocol - abstract methods (all methods except executeParser contain a method definition):
    connectImap() // used by gmx, gmail and web.de email ids
    connectPop3() // used by hotmail and yahoo to fetch emails of the inbox folder
    createMessageDigest() // used by every email provider (gmx, gmail, web.de, yahoo, hotmail)
    establishDBConnection() // used by every email
    emailAlreadyExists() // used by every email; checks whether the email already exists in the db or not, and if not, stores it
    storeemailproperties() // used by every email to store email properties to the MySQL database
    executeParser() // nothing written in it; overridden and used by just hotmail and yahoo to fetch emails from the spam folder

    Imap extends EmailProtocol (nothing in it, but I have to have it to access the methods of EmailProtocol; this is used to fetch emails from gmail, gmx and web.de). I know this is really a bad way but don't know how to do it another way.

    Hotmail extends EmailProtocol - methods:
    executeParser() // used by just the hotmail email id
    fetchjunkemails() // also very specific to only the hotmail email id

    Yahoo extends EmailProtocol - methods:
    executeParser()
    storeEmailtotemptable()
    MoveEmailtoInbox()
    getFoldername()
    nullorEquals()
    (All of the above methods are specific to the yahoo email id.)

    public class DateTimeFormat:
    format() // formats the datetime of gmx, gmail and web.de emails
    formatYahoodate() // formats the datetime of yahoo emails
    formatHotmaildate() // formats the datetime of hotmail emails

    public class StringFormat:
    ConvertStreamToString() // accessed by every class except the DateTimeFormat class
    formatFromTo() // accessed by every class except the DateTimeFormat class

    public class CheckDatabaseExistance:
    public static void checkForDatabaseTablesAvailability() // checks at the beginning whether the database and required tables exist in MySQL or not; if not, it creates them

    Please see the code of my MainController class so that you can have an idea about how I use the different classes.
    public class MainController {
        public static void main(String[] args) throws Exception {
            ArrayList<String> web_de_folders = new ArrayList<String>();
            web_de_folders.add("INBOX");
            web_de_folders.add("Unbekannt");
            web_de_folders.add("Spam");
            web_de_folders.add("OUTBOX");
            web_de_folders.add("SENT");
            web_de_folders.add("DRAFTS");
            web_de_folders.add("TRASH");
            web_de_folders.add("Trash");

            ArrayList<String> gmx_folders = new ArrayList<String>();
            gmx_folders.add("INBOX");
            gmx_folders.add("Archiv");
            gmx_folders.add("Entwürfe");
            gmx_folders.add("Gelöscht");
            gmx_folders.add("Gesendet");
            gmx_folders.add("Spamverdacht");
            gmx_folders.add("Trash");

            ArrayList<String> gmail_folders = new ArrayList<String>();
            gmail_folders.add("Inbox");
            gmail_folders.add("[Google Mail]/Spam");
            gmail_folders.add("[Google Mail]/Trash");
            gmail_folders.add("[Google Mail]/Sent Mail");

            ArrayList<String> pop3_folders = new ArrayList<String>();
            pop3_folders.add("INBOX");

            CheckDatabaseExistance.checkForDatabaseTablesAvailability();

            EmailProtocol imap = new Imap();
            System.out.println("CHECKING FOR NEW EMAILS IN WEB.DE...(IMAP)");
            System.out.println("*********************************************************************************");
            imap.connectImap("[email protected]", "pwd", web_de_folders);

            System.out.println("\nCHECKING FOR NEW EMAILS IN GMX.DE...(IMAP)");
            System.out.println("*********************************************************************************");
            imap.connectImap("[email protected]", "pwd", gmx_folders);

            System.out.println("\nCHECKING FOR NEW EMAILS IN GMAIL...(IMAP)");
            System.out.println("*********************************************************************************");
            imap.connectImap("[email protected]", "pwd", gmail_folders);

            EmailProtocol yahoo = new Yahoo();
            Yahoo y = new Yahoo();
            System.out.println("\nEXECUTING YAHOO PARSER");
            System.out.println("*********************************************************************************");
            y.executeParser("http://de.mc1321.mail.yahoo.com/mc/welcome?ymv=0", "[email protected]", "pwd");

            System.out.println("\nCHECKING FOR NEW EMAILS IN INBOX OF YAHOO (POP3)");
            System.out.println("*********************************************************************************");
            yahoo.connectPop3("[email protected]", "pwd", pop3_folders);

            System.out.println("\nCHECKING FOR NEW EMAILS IN INBOX OF HOTMAIL (POP3)");
            System.out.println("*********************************************************************************");
            yahoo.connectPop3("[email protected]", "pwd", pop3_folders);

            EmailProtocol hotmail = new Hotmail();
            Hotmail h = new Hotmail();
            System.out.println("\nEXECUTING HOTMAIL PARSER");
            System.out.println("*********************************************************************************");
            h.executeParser("https://login.live.com/ppsecure/post.srf", "[email protected]", "pwd");
        }
    }

    I have kept the DateTimeFormat and StringFormat classes public so that I can access their public methods directly (DateTimeFormat.formatYahoodate(...), for example) from different methods. This is the first time I have developed something in Java. It serves its purpose, but of course the code is still not as efficient as it could be. I need your suggestions on this project.
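    For what it's worth, one common way to flatten this kind of hierarchy is to model the transport (IMAP, POP3, web-scraping parser) as a strategy rather than a superclass, so empty subclasses like Imap disappear and shared persistence lives in one collaborator. A rough sketch with hypothetical names - not a drop-in replacement for the classes above:

    import java.util.List;

    // Sketch: each transport implements one small interface...
    interface EmailFetcher {
        List<String> fetch(String account, String password, List<String> folders) throws Exception;
    }

    class ImapFetcher implements EmailFetcher {
        public List<String> fetch(String account, String password, List<String> folders) throws Exception {
            // connect via the JavaMail IMAP store and return raw messages
            throw new UnsupportedOperationException("sketch only");
        }
    }

    class Pop3Fetcher implements EmailFetcher {
        public List<String> fetch(String account, String password, List<String> folders) throws Exception {
            // connect via the JavaMail POP3 store and return raw messages
            throw new UnsupportedOperationException("sketch only");
        }
    }

    // ...shared persistence (digest, duplicate check, insert) lives in one place,
    // instead of being inherited by every provider class.
    class EmailStore {
        void saveIfNew(String rawMessage) {
            // compute digest, check for existence in MySQL, insert if absent
        }
    }

    // A provider then becomes configuration: which fetcher handles which folders.
    class Account {
        final String address;
        final String password;
        final EmailFetcher inboxFetcher;  // IMAP or POP3
        final EmailFetcher spamFetcher;   // parser-based for yahoo/hotmail

        Account(String address, String password, EmailFetcher inbox, EmailFetcher spam) {
            this.address = address;
            this.password = password;
            this.inboxFetcher = inbox;
            this.spamFetcher = spam;
        }
    }

    The main win is that the MainController no longer needs to know which concrete subclass implements what; it just iterates over configured accounts, fetches, and hands each raw message to the store.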

    Read the article

  • Relay Access Denied (State 13) Postfix + Dovecot + Mysql

    - by Pierre Jeptha
    So we have been scratching our heads for quite some time over this relay issue that has presented itself since we re-built our mail server after a failed Webmin update. We are running Debian Karmic with Postfix 2.6.5 and Dovecot 1.1.11, sourcing from a MySQL database and authenticating with SASL2 and PAM. Here are the symptoms of our problem:

    1) When users are on our local network they can send and receive 100% perfectly fine.

    2) When users are off our local network and try to send to domains not hosted by this mail server (e.g. gmail), they get the "Relay Access Denied" error. However, users can send to domains of this mail server when off the local network fine.

    3) We host several virtual domains on this mailserver, the primary domain being airnet.ca. The rest of our virtual domains (e.g. jeptha.ca) cannot receive email from domains not hosted by this mailserver (i.e. gmail and such cannot send to them). They receive bounce backs of "Relay Access Denied (State 13)". This is regardless of whether they are on our local network or not, which is why it is so urgent for us to get this solved.

    Here is our main.cf from postfix:

    myhostname = mail.airnet.ca
    mydomain = airnet.ca
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    biff = no
    smtpd_sasl_type = dovecot
    queue_directory = /var/spool/postfix
    smtpd_sasl_path = private/auth
    smtpd_sender_restrictions = permit_mynetworks permit_sasl_authenticated
    smtp_sasl_auth_enable = yes
    smtpd_sasl_auth_enable = yes
    append_dot_mydomain = no
    readme_directory = no
    smtp_tls_security_level = may
    smtpd_tls_security_level = may
    smtp_tls_note_starttls_offer = yes
    smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_loglevel = 1
    smtpd_tls_received_header = yes
    smtpd_tls_auth_only = no
    alias_maps = proxy:mysql:/etc/postfix/mysql/alias.cf hash:/etc/aliases
    alias_database = hash:/etc/aliases
    mydestination = mail.airnet.ca, airnet.ca, localhost.$mydomain
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    recipient_delimiter = +
    local_recipient_maps = $alias_maps $virtual_mailbox_maps proxy:unix:passwd.byname
    home_mailbox = /var/virtual/
    mail_spool_directory = /var/spool/mail
    mailbox_transport = maildrop
    smtpd_helo_required = yes
    disable_vrfy_command = yes
    smtpd_etrn_restrictions = reject
    smtpd_data_restrictions = reject_unauth_pipelining, permit
    show_user_unknown_table_name = no
    proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps $virtual_uid_maps $virtual_gid_maps
    virtual_alias_domains =
    message_size_limit = 20971520
    transport_maps = proxy:mysql:/etc/postfix/mysql/vdomain.cf
    virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql/vmailbox.cf
    virtual_alias_maps = proxy:mysql:/etc/postfix/mysql/alias.cf hash:/etc/mailman/aliases
    virtual_uid_maps = proxy:mysql:/etc/postfix/mysql/vuid.cf
    virtual_gid_maps = proxy:mysql:/etc/postfix/mysql/vgid.cf
    virtual_mailbox_base = /
    virtual_mailbox_limit = 209715200
    virtual_mailbox_extended = yes
    virtual_create_maildirsize = yes
    virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql/vmlimit.cf
    virtual_mailbox_limit_override = yes
    virtual_mailbox_limit_inbox = no
    virtual_overquote_bounce = yes
    virtual_minimum_uid = 1
    maximal_queue_lifetime = 1d
    bounce_queue_lifetime = 4h
    delay_warning_time = 1h
    append_dot_mydomain = no
    qmgr_message_active_limit = 500
    broken_sasl_auth_clients = yes
    smtpd_sasl_path = private/auth
    smtpd_sasl_local_domain = $myhostname
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_authenticated_header = yes
    smtp_bind_address = 142.46.193.6
    relay_domains = $mydestination
    mynetworks = 127.0.0.0, 142.46.193.0/25
    inet_interfaces = all
    inet_protocols = all

    And here is the master.cf from postfix:

    # ==========================================================================
    # service type private unpriv chroot wakeup maxproc command + args
    #              (yes)   (yes)  (yes)  (never) (100)
    # ==========================================================================
    smtp inet n - - - - smtpd
    #submission inet n - - - - smtpd
    # -o smtpd_tls_security_level=encrypt
    # -o smtpd_sasl_auth_enable=yes
    # -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    # -o milter_macro_daemon_name=ORIGINATING
    #smtps inet n - - - - smtpd
    # -o smtpd_tls_wrappermode=yes
    # -o smtpd_sasl_auth_enable=yes
    # -o smtpd_client_restrictions=permit_sasl_authenticated,reject
    # -o milter_macro_daemon_name=ORIGINATING
    #628 inet n - - - - qmqpd
    pickup fifo n - - 60 1 pickup
    cleanup unix n - - - 0 cleanup
    qmgr fifo n - n 300 1 qmgr
    #qmgr fifo n - - 300 1 oqmgr
    tlsmgr unix - - - 1000? 1 tlsmgr
    rewrite unix - - - - - trivial-rewrite
    bounce unix - - - - 0 bounce
    defer unix - - - - 0 bounce
    trace unix - - - - 0 bounce
    verify unix - - - - 1 verify
    flush unix n - - 1000? 0 flush
    proxymap unix - - n - - proxymap
    proxywrite unix - - n - 1 proxymap
    smtp unix - - - - - smtp
    # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
    relay unix - - - - - smtp -o smtp_fallback_relay=
    # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
    showq unix n - - - - showq
    error unix - - - - - error
    retry unix - - - - - error
    discard unix - - - - - discard
    local unix - n n - - local
    virtual unix - n n - - virtual
    lmtp unix - - - - - lmtp
    anvil unix - - - - 1 anvil
    scache unix - - - - 1 scache
    maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
    #
    # See the Postfix UUCP_README file for configuration details.
    #
    uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
    #
    # Other external delivery methods.
    #
    ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
    bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
    scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
    mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
    spfpolicy unix - n n - - spawn user=nobody argv=/usr/bin/perl /usr/sbin/postfix-policyd-spf-perl
    smtp-amavis unix - - n - 4 smtp -o smtp_data_done_timeout=1200 -o smtp_send_xforward_command=yes -o disable_dns_lookups=yes
    #127.0.0.1:10025 inet n - n - - smtpd
    dovecot unix - n n - - pipe flags=DRhu user=dovecot:21pever1lcha0s argv=/usr/lib/dovecot/deliver -d ${recipient

    Here is dovecot.conf:

    protocols = imap imaps pop3 pop3s
    disable_plaintext_auth = no
    log_path = /etc/dovecot/logs/err
    info_log_path = /etc/dovecot/logs/info
    log_timestamp = "%Y-%m-%d %H:%M:%S ".
    syslog_facility = mail
    ssl_listen = 142.46.193.6
    ssl_disable = no
    ssl_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    ssl_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    mail_location = mbox:~/mail:INBOX=/var/virtual/%d/mail/%u
    mail_privileged_group = mail
    mail_debug = yes
    protocol imap {
      login_executable = /usr/lib/dovecot/imap-login
      mail_executable = /usr/lib/dovecot/rawlog /usr/lib/dovecot/imap
      mail_executable = /usr/lib/dovecot/gdbhelper /usr/lib/dovecot/imap
      mail_executable = /usr/lib/dovecot/imap
      imap_max_line_length = 65536
      mail_max_userip_connections = 20
      mail_plugin_dir = /usr/lib/dovecot/modules/imap
      login_greeting_capability = yes
    }
    protocol pop3 {
      login_executable = /usr/lib/dovecot/pop3-login
      mail_executable = /usr/lib/dovecot/pop3
      pop3_enable_last = no
      pop3_uidl_format = %08Xu%08Xv
      mail_max_userip_connections = 10
      mail_plugin_dir = /usr/lib/dovecot/modules/pop3
    }
    protocol managesieve {
      sieve=~/.dovecot.sieve
      sieve_storage=~/sieve
    }
    mail_plugin_dir = /usr/lib/dovecot/modules/lda
    auth_executable = /usr/lib/dovecot/dovecot-auth
    auth_process_size = 256
    auth_cache_ttl = 3600
    auth_cache_negative_ttl = 3600
    auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@
    auth_verbose = yes
    auth_debug = yes
    auth_debug_passwords = yes
    auth_worker_max_count = 60
    auth_failure_delay = 2
    auth default {
      mechanisms = plain login
      passdb sql {
        args = /etc/dovecot/dovecot-sql.conf
      }
      userdb sql {
        args = /etc/dovecot/dovecot-sql.conf
      }
      socket listen {
        client {
          path = /var/spool/postfix/private/auth
          mode = 0660
          user = postfix
          group = postfix
        }
        master {
          path = /var/run/dovecot/auth-master
          mode = 0600
        }
      }
    }

    Please, if you require anything else, do not hesitate to ask; I will post it ASAP. Any help or suggestions are greatly appreciated! Thanks, Pierre

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most catalog, file and certificate requests I get a 403 response.

    ### Authenticated paths - these apply only when the client
    ### has a valid certificate and is thus authenticated

    # allow nodes to retrieve their own catalog
    path ~ ^/catalog/([^/]+)$
    method find
    allow $1

    # allow nodes to retrieve their own node definition
    path ~ ^/node/([^/]+)$
    method find
    allow $1

    # allow all nodes to access the certificates services
    path ~ ^/certificate_revocation_list/ca
    method find
    allow *

    # allow all nodes to store their reports
    path /report
    method save
    allow *

    # unconditionally allow access to all file services
    # which means in practice that fileserver.conf will
    # still be used
    path /file
    allow *

    ### Unauthenticated ACL, for clients for which the current master doesn't
    ### have a valid certificate; we allow authenticated users, too, because
    ### there isn't a great harm in letting that request through.

    # allow access to the master CA
    path /certificate/ca
    auth any
    method find
    allow *

    path /certificate/
    auth any
    method find
    allow *

    path /certificate_request
    auth any
    method find, save
    allow *

    path /facts
    auth any
    method find, search
    allow *

    # this one is not strictly necessary, but it has the merit
    # of showing the default policy, which is deny everything else
    path /
    auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

    [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
    [sudo] password for amisr1:
    Starting Puppet client version 3.0.1
    Warning: Unable to fetch my node definition, but the agent run will continue:
    Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
    Info: Retrieving plugin
    Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
    Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
    Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
    Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
    Using cached catalog
    Error: Could not retrieve catalog; skipping run
    Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

    XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and, going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all):

    [files]
    path /apps/puppet/files
    allow *

    [private]
    path /apps/puppet/private/%H
    allow *

    [modules]
    allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

    nginx version: nginx/1.3.9
    built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
    TLS SNI support enabled
    configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard nginx puppet master conf:

    server {
      ssl on;
      listen 8140 ssl;
      server_name _;
      passenger_enabled on;
      passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
      passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
      passenger_min_instances 5;
      access_log logs/puppet_access.log;
      error_log logs/puppet_error.log;
      root /apps/nginx/html/rack/public;
      ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
      ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
      ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
      ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
      ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
      ssl_prefer_server_ciphers on;
      ssl_verify_client optional;
      ssl_verify_depth 1;
      ssl_session_cache shared:SSL:128m;
      ssl_session_timeout 5m;
    }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

    [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
    async_storeconfigs = false
    authconfig = /etc/puppet/namespaceauth.conf
    autosign = /etc/puppet/autosign.conf
    catalog_cache_terminus = store_configs
    confdir = /etc/puppet
    config = /etc/puppet/puppet.conf
    config_file_name = puppet.conf
    config_version = ""
    configprint = all
    configtimeout = 120
    dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
    deviceconfig = /etc/puppet/device.conf
    fileserverconfig = /etc/puppet/fileserver.conf
    genconfig = false
    hiera_config = /etc/puppet/hiera.yaml
    localconfig = /var/lib/puppet/state/localconfig
    name = config
    rest_authconfig = /etc/puppet/auth.conf
    storeconfigs = true
    storeconfigs_backend = puppetdb
    tagmap = /etc/puppet/tagmail.conf
    thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development, both its benefits and limitations.

    To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context:

    Test - An item or action which measures something in some quantifiable form.
    Driven - The primary motivation or focus of a series of activities (a process).
    Development - All phases of a software project/product, from concept through delivery.

    These simple definitions lead to the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product."

    There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the conventional education process that most of us grew up with. The focus was on getting the best grades as measured by various tests. Many of these tests measured rote memorization rather than understanding of the subject matter. The result is that many people graduated with high scores but without "quality and correctness" in their ability to apply the subject matter (of course, the flip side is also true: certain people DID understand the material but were not very good at taking this type of test).

    Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language, and tools, the remainder of this post will use Microsoft Visual Studio and Team Foundation Server (TFS) for examples.

    It should be recognized that everyone practices at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid?) that drives the ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but they really only address a small aspect of the overall effort.

    To see TDD in a bigger view, let's identify the activities that are part of the software development lifecycle. These are presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones are: Requirements Gathering, Architecture, Design, Implementation, and Quality Assurance.

    Can each of these items be subjected to a process that establishes quantified metrics reflecting both the quality and correctness of that item? It should be clear that conventional unit tests do not apply to all of them; at best, they can verify that a local aspect of the implementation (e.g. a class/method) matches (the test writer's perspective of) the appropriate design document.

    So what can we do? For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and to providing clear information on which items need to be addressed, along with the appropriate time to address them, in varying levels of detail. Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
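    To make "quantifiable and durable" concrete, here is a minimal sketch of such a test, assuming C# with MSTest (in keeping with the Visual Studio/TFS setting); the class under test and all names are hypothetical:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical system under test.
        public static class Discounts
        {
            public static decimal Apply(decimal price, decimal rate)
            {
                return price * (1 - rate);
            }
        }

        [TestClass]
        public class DiscountTests
        {
            // Quantifiable: asserts an exact expected value rather than a
            // vague pass/fail. Durable: can be re-run automatically on
            // every build over the entire cycle.
            [TestMethod]
            public void Apply_TakesTenPercentOff()
            {
                Assert.AreEqual(90m, Discounts.Apply(100m, 0.10m));
            }
        }

    Such a test, however, is only as good as the requirement behind it, which is exactly where the next example leads.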
    Returning for a moment to our "education example", one must also be careful about how the tests are organized and how the measurements are taken. If a test is in a multiple-choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student provide only a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number.

    The "Wrong Process"/"Right Answer" case is probably the single biggest problem in software development. Even very simple items can suffer from it. As an example, consider the following code for a "straight line" calculation. Is it correct (for integral points)?

        int Solve(int m, int b, int x) { return m * x + b; }

    Most people would respond "yes". But let's take the question one step further: is it correct for all possible values of m, b, and x? (No fair if you cheated by focusing on the bolded text!) Without additional information regarding constraints on the possible values of m, b, and x, the answer must be NO; there is a risk of overflow/wraparound that will produce an incorrect result. In C#, for instance, evaluating the expression in a checked context would at least turn a silent wraparound into a loud failure (see the sketch at the end of this post).

    To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). Only when the bounding conditions are defined is it possible to determine whether the code is "correct" and has "quality". Yet how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) "know" what the bounds are and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, and so on. This leads directly to the types of system failures that plague so many projects.

    This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply: each item should be tested, and the test should be defined/implemented before (or concurrently with) the definition/implementation of the actual item. We also add concepts that expand the scope and alter the style by recognizing that there are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way), and that correctness and quality cannot be measured solely by "correct results". In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas.
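    As a closing sketch (again assuming C# with MSTest; the class name and the chosen bounds are hypothetical), here is how a test could make the bounding conditions of the straight-line example explicit instead of leaving them implied:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class LineCalculator
        {
            // The straight-line calculation from above; checked() surfaces
            // an overflow as an OverflowException instead of a silently
            // wrapped, "right-looking" wrong answer.
            public static int Solve(int m, int b, int x)
            {
                return checked(m * x + b);
            }
        }

        [TestClass]
        public class LineCalculatorTests
        {
            // Correct within the stated bounds...
            [TestMethod]
            public void Solve_IsCorrect_WithinStatedBounds()
            {
                Assert.AreEqual(7, LineCalculator.Solve(2, 1, 3));
            }

            // ...and fails loudly, rather than wrongly, outside them.
            [TestMethod]
            [ExpectedException(typeof(System.OverflowException))]
            public void Solve_FailsLoudly_OutsideIntBounds()
            {
                LineCalculator.Solve(int.MaxValue, 1, 2);
            }
        }

    Whether out-of-range inputs should throw, clamp, or widen to a long is precisely the kind of bounding condition that has to be tested against the stakeholders first.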

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for general backup and restore in Essbase in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, which was released in version 11.1 and enhances the existing restore options.

    Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1).

    Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging fills that gap and provides an option to capture these post-backup transactions for later replay. (The full list of transaction types that can be logged when Transaction Logging is enabled appears in the Backup and Recovery Guide referenced above.)

    To activate the feature, corresponding statements are added to the Essbase.cfg file using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Here appname and dbname are optional parameters that, in combination with the ENABLE or DISABLE keyword, let you switch Transaction Logging on or off for certain applications or databases. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs; this directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that should not be changed.

    The following example first enables logging at a more general level for all databases in the application Sample, then disables it at a more granular level for only the Basic database in that application, hence excluding it from being logged:

        TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file, you must restart the Essbase server in order to initialize the settings.

    A replay of logged transactions after restoring a database, should it be required, can be performed only by administrators. The following options are available. In Administration Services, select Replay Transactions from the right-click menu on the database. Here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Alternatively, you can replay transactions selectively based on a range of sequence IDs, which can be viewed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2, and so on) are assigned to each logged transaction, indicating the order in which the transactions were performed. This helps to ensure the integrity of the restored data after a replay, as transactions are replayed in the same order in which they were originally performed; a calculation originally run after a data load, for example, cannot be replayed before the data load itself has been replayed. After a transaction is replayed, you can replay only transactions with a greater sequence ID.
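    For scripted environments, Administration Services is not the only way to inspect and replay logged transactions; MaxL offers equivalent statements. The following is a sketch only, and the exact grammar should be verified against the Essbase Technical Reference for your release:

        /* Display logged transactions and their sequence IDs. */
        query database Sample.Basic list transactions;

        /* Replay a specific range of logged transactions by sequence ID. */
        alter database Sample.Basic replay transactions using sequence_id_range 4 to 7;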
    For example, replaying the transaction with sequence ID 4 includes all preceding transactions; afterwards, you can only replay transactions with a sequence ID of 5 or greater.

    Tip: After restoring a database from a backup, you should always replay all transactions logged after that backup before executing new transactions.

    But not only the transaction information itself needs to be logged and stored in a specified directory, as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

        ARBORPATH/app/appname/dbname/Replay

    These files are then used during the replay of a logged transaction. By default, Essbase archives data load and rules files only for client data loads; to specify which data to archive when logging transactions, you can add the TRANSACTIONLOGDATALOADARCHIVE command to the Essbase.cfg file. The syntax for the statement is:

        TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

    The [appname [dbname]] argument works as described above for TRANSACTIONLOGLOCATION. Going by the options discussed in this article, the valid values for the OPTION argument are CLIENT (the default, matching the behavior described above), SERVER, SERVER_CLIENT and NONE; choose the setting according to the locations from which transactions usually take place. For instance, TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT would archive both server-side and client-side files for Sample.Basic. Selecting the NONE option prevents Essbase from saving these files, and the data load cannot be replayed; in this case you must first manually load the data before you can replay the remaining transactions.

    Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

    You can find more detailed information in the following documents:

    Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1)
    Oracle Essbase Online Documentation (rel. 11.1.2.1)
    Enterprise Performance Management System Documentation (including previous releases)

    or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Disclaimer: All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes and requirements.
    As guidance, please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences arising from the use or implementation of these features.

    Read the article

< Previous Page | 787 788 789 790 791 792 793 794 795 796 797 798  | Next Page >