Search Results

Search found 24935 results on 998 pages for 'test utilities'.


  • CodePlex Daily Summary for Sunday, May 02, 2010

    CodePlex Daily Summary for Sunday, May 02, 2010

    New Projects

    - AdventureWorks in Access: AdventureWorks database in Access format. Data has been ported to Access starting from the Adventure Works database for SQL Server 2008.
    - amplifi: This project is still under construction. We will add more information here as soon as it is available.
    - ASP.NET MVC Bug Tracker: Bug tracker written in C# ASP.NET MVC 2
    - BigDecimal: BigDecimal is an attempt to create a number class that can have large precision. It is developed in VB.NET (.NET 4).
    - CBM-Command: Coming soon....
    - Chuyou: Chuyou
    - CMinus: A C Minus compiler!
    - Complex and advanced mathematical functions: Mathematics toolkit is a class library project which helps programmers calculate mathematical functions easily.
    - Confuser: Confuser is an obfuscator for .NET. It is developed in C# and uses Mono.Cecil for assembly manipulation.
    - easypos: A micro point of sale that allows express clothing sales and integrates easily and transparently with the Click One ERP
    - Elmech Address Book: Web-based address book for maintaining details of your business clients. This project targets suppliers, traders and manufacturers. Applicat...
    - Feed Viewer: Feed Viewer is able to synchronize subscribed feeds and read news among all computers you are using. It understands both RSS and Atom formats. It can ...
    - Google URL Shortener, C#: Implementation in C# of generating short URLs via the Goo.gl service (Google URL Shortener)
    - MARS - Medical Assistant Record System: MARS - Medical Assistant Record System
    - Rx Contrib: Rx Contrib is a library which contains extensions for the Rx framework
    - Simple Service Administration Tool: A simple tool to start/stop/restart a service of a WinNT-based system. The tool is placed in the task bar as a notify icon, so the specified servic...
    - Vis3D: Visual 3D controls for Silverlight.
    - VisContent: XML content controls for ASP.NET.
    - Windows Phone 7 database: This project implements an Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The database consists of table objects, each one s...

    New Releases

    - $log$ / Keyword Substitution / Expansion Check-In Policy (TFS - LogSubstPol): LogSubstPol_v1.2010.0.4 (VS2010): LogSubstPol is a TFS check-in policy which inserts the check-in comments and other keywords into your source code, so you can keep track of the ch...
    - Bojinx: Bojinx Core V4.5.1: The following new features were added: You can now use either BojinxMXMLContext or ContextModule to configure your application or module context. ...
    - CBM-Command: Initial Public Demonstration: Initial public demonstration version. Can browse attached drives and display directory of any attached drive. A common question is "How does it w...
    - Confuser: Confuser v1.0: It is the Confuser v1.0 that is used to confuse reverse-engineers :)
    - Font Family Name Retrieval: 2nd Release: Added new MKV Font Extractor application to showcase the library. MKV Font Extractor depends on MKVToolnix to be installed before it will work. R...
    - Google URL Shortener, C#: Goo.gl-CS v1 Beta: Extract the ZIP file to any location. Two files have to be in the same folder!
    - HouseFly controls: HouseFly controls alpha 0.9.6.1: HouseFly controls release 0.9.6.1 alpha
    - IsWiX: IsWiX 1.0.261.0: Build 1.0.261.0 - built against Fireworks 1.0.264.0. Adds support for VS2010 integration to support WiX 3.5 beta releases.
    - Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.2.0: Added conventions-based catalog (read more at http://www.thecodejunkie.com/2010/03/bringing-convention-based-registration.html) MEF + Unity integ...
    - MARS - Medical Assistant Record System: license: license
    - NSIS Autorun: NSIS Autorun 0.1.5: This release includes source code, executable binary, files and example materials.
    - PHP.net: Release 0.0.0.1: This is the first release of PHP.Net. The features available in this release are: New File, Save File, Save As, Open File. In the rar file is th...
    - Rx Contrib: V1: Rx Contrib is an ongoing effort for community additions for Rx. Current features are: ReactiveQueue: ISubject that does not lose values if there are ...
    - Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.0: Added a margin for icon display. Added the PopupMenuItem class which is a derivative of the DockPanel. Find* methods can now drill down the v...
    - Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.1 Beta: Added a margin for icon display. Added the PopupMenuItem class which is a derivative of the DockPanel. Added an AddSeperator method. The Fin...
    - Simple Service Administration Tool: SSATool 0.1.3: New Simple Service Administration Tool version 0.1.3, compiled with Visual Studio .NET 2010.
    - sMAPedit: sMAPedit v0.7a + Map-Pack: Requires the additional Map-Pack. Added: height setting by color picker (shift+leftclick)
    - sMAPedit: sMAPedit v0.7b: Fixed: force a garbage collection update to prevent pictureBox's memory leak
    - sqwarea: Sqwarea 0.0.228.0 (alpha): This release corrects a critical bug in the ConnexityNotifier service. We strongly recommend you upgrade to this version. Known bugs: if you open...
    - StackOverflow Desktop Client in C# and WPF: StackOverflow Client 0.1: Source code for the sample.
    - TortoiseHg: TortoiseHg 1.0.2: This is a bug fix release; we recommend all users upgrade to 1.0.2
    - VCC: Latest build, v2.1.30501.0: Automatic drop of latest build
    - VidCoder: 0.4.0: Changes: Added ability to queue up multiple video files or titles at once. These queued jobs will use the currently selected encoding settings. Mul...
    - WabbitStudio Z80 Software Tools: Wabbitemu 32-bit Test Release: Wabbitemu Visual Studio build for testing purposes
    - Windows Phone 7 database: Initial Release v1.0: This project implements an Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The usage of this software is very simple. You cre...
    - YouTubeEmbeddedVideo WebControl for ASP.NET: VideoControls version 1: This zip file contains the VideoControls.dll, version 1.

    Most Popular Projects

    Rawr; WBFS Manager; AJAX Control Toolkit; patterns & practices – Enterprise Library; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); iTuner - The iTunes Companion; ASP.NET; DotNetNuke® Community Edition

    Most Active Projects

    patterns & practices – Enterprise Library; Rawr; Ionics Isapi Rewrite Filter; HydroServer - CUAHSI Hydrologic Information System Server; patterns & practices: Azure Security Guidance; TinyProject; NB_Store - Free DotNetNuke Ecommerce Catalog Module; BlogEngine.NET; Dambach Linear Algebra Framework; Facebook Developer Toolkit

    Read the article

  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor’s own data repository, in order to pull out bits of information that they’re interested in. Since there’s clearly interest out there in playing around directly with the data repository, I thought I’d write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it’s tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there.

    I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema – that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema:

        SELECT type_desc, COUNT(*) AS [count]
        FROM sys.objects
        GROUP BY type_desc
        ORDER BY type_desc;

        type_desc                         count
        ---------------------------------------
        DEFAULT_CONSTRAINT                   63
        FOREIGN_KEY_CONSTRAINT              181
        INTERNAL_TABLE                        3
        PRIMARY_KEY_CONSTRAINT              190
        SERVICE_QUEUE                         3
        SQL_INLINE_TABLE_VALUED_FUNCTION    381
        SQL_SCALAR_FUNCTION                   2
        SQL_STORED_PROCEDURE                100
        SYSTEM_TABLE                         41
        UNIQUE_CONSTRAINT                    54
        USER_TABLE                          193
        VIEW                                124

    With 193 tables, 124 views, 100 stored procedures and 381 table-valued functions, that’s quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let’s narrow things down a bit and only look at the tables belonging to the data schema. That’s where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables:

        SELECT sch.name + '.' + obj.name AS [name]
        FROM sys.objects obj
        JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
        WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data'
        ORDER BY sch.name, obj.name;

    This query still returns 110 tables. I won’t show them all here, but let’s have a look at the first few of them:

        data.Cluster_Keys
        data.Cluster_Machine_ClockSkew_UnstableSamples
        data.Cluster_Machine_Cluster_StableSamples
        data.Cluster_Machine_Keys
        data.Cluster_Machine_LogicalDisk_Capacity_StableSamples
        data.Cluster_Machine_LogicalDisk_Keys
        data.Cluster_Machine_LogicalDisk_Sightings
        data.Cluster_Machine_LogicalDisk_UnstableSamples
        data.Cluster_Machine_LogicalDisk_Volume_StableSamples
        data.Cluster_Machine_Memory_Capacity_StableSamples
        data.Cluster_Machine_Memory_UnstableSamples
        data.Cluster_Machine_Network_Capacity_StableSamples
        data.Cluster_Machine_Network_Keys
        data.Cluster_Machine_Network_Sightings
        data.Cluster_Machine_Network_UnstableSamples
        data.Cluster_Machine_OperatingSystem_StableSamples
        data.Cluster_Machine_Ping_UnstableSamples
        data.Cluster_Machine_Process_Instances
        data.Cluster_Machine_Process_Keys
        data.Cluster_Machine_Process_Owner_Instances
        data.Cluster_Machine_Process_Sightings
        data.Cluster_Machine_Process_UnstableSamples
        …

    There are two things I want to draw your attention to: first, the table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks); second, for each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples.
    Not every object type has a table for every suffix, but the _Keys suffix is especially important and a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy:

        SELECT sch.name + '.' + obj.name AS [name]
        FROM sys.objects obj
        JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
        WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' AND obj.name LIKE '%_Keys'
        ORDER BY sch.name, obj.name;

        data.Cluster_Keys
        data.Cluster_Machine_Keys
        data.Cluster_Machine_LogicalDisk_Keys
        data.Cluster_Machine_Network_Keys
        data.Cluster_Machine_Process_Keys
        data.Cluster_Machine_Services_Keys
        data.Cluster_ResourceGroup_Keys
        data.Cluster_ResourceGroup_Resource_Keys
        data.Cluster_SqlServer_Agent_Job_History_Keys
        data.Cluster_SqlServer_Agent_Job_Keys
        data.Cluster_SqlServer_Database_BackupType_Backup_Keys
        data.Cluster_SqlServer_Database_BackupType_Keys
        data.Cluster_SqlServer_Database_CustomMetric_Keys
        data.Cluster_SqlServer_Database_File_Keys
        data.Cluster_SqlServer_Database_Keys
        data.Cluster_SqlServer_Database_Table_Index_Keys
        data.Cluster_SqlServer_Database_Table_Keys
        data.Cluster_SqlServer_Error_Keys
        data.Cluster_SqlServer_Keys
        data.Cluster_SqlServer_Services_Keys
        data.Cluster_SqlServer_SqlProcess_Keys
        data.Cluster_SqlServer_TopQueries_Keys
        data.Cluster_SqlServer_Trace_Keys
        data.Group_Keys

    The full object type hierarchy looks like this:

        Cluster
            Machine
                LogicalDisk
                Network
                Process
                Services
            ResourceGroup
                Resource
            SqlServer
                Agent
                    Job
                        History
                Database
                    BackupType
                        Backup
                    CustomMetric
                    File
                    Table
                        Index
                Error
                Services
                SqlProcess
                TopQueries
                Trace
        Group

    Okay, but what about the individual objects themselves represented at each level in this hierarchy? Well, that’s what the _Keys tables are for. This is probably best illustrated by way of a simple example – how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this:

        SELECT clstr._Name AS cluster_name,
               srvr._Name AS instance_name,
               db._Name AS database_name
        FROM data.Cluster_SqlServer_Database_Keys db
        JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server
        JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster
        WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC
        ORDER BY clstr._Name, srvr._Name, db._Name;

        cluster_name   instance_name   database_name
        ---------------------------------------------
        dev-chrisl2                    SqlMonitorData
        dev-chrisl2                    master
        dev-chrisl2                    model
        dev-chrisl2                    msdb
        dev-chrisl2                    mssqlsystemresource
        dev-chrisl2                    tempdb
        dev-chrisl2    sql2005         SqlMonitorData
        dev-chrisl2    sql2005         TestDatabase
        dev-chrisl2    sql2005         master
        dev-chrisl2    sql2005         model
        dev-chrisl2    sql2005         msdb
        dev-chrisl2    sql2005         mssqlsystemresource
        dev-chrisl2    sql2005         tempdb
        dev-chrisl2    sql2008         SqlMonitorData
        dev-chrisl2    sql2008         master
        dev-chrisl2    sql2008         model
        dev-chrisl2    sql2008         msdb
        dev-chrisl2    sql2008         mssqlsystemresource
        dev-chrisl2    sql2008         tempdb

    These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I’m developing. There are a few important things we can learn from this query:

    - Each _Keys table has a column named Id. This is the primary key.
    - Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy.
    - Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object.

    Actually, that last item isn’t always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn’t sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web-browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object.

    Well, that’s it for now. I’ve given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I’ll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.
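    If you would rather pull this hierarchy from application code than from SSMS, here is a minimal C# sketch that runs the same databases-per-instance query; the connection string is a placeholder you would point at your own SqlMonitorData database:

        using System;
        using System.Data.SqlClient;

        class DataRepositoryDemo
        {
            static void Main()
            {
                // Placeholder: substitute your own server and SqlMonitorData database.
                const string connectionString =
                    "Server=.;Database=SqlMonitorData;Integrated Security=true";

                const string query = @"
                    SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name
                    FROM data.Cluster_SqlServer_Database_Keys db
                    JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id
                    JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id
                    ORDER BY clstr._Name, srvr._Name, db._Name;";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // instance_name is blank for a default instance.
                            Console.WriteLine("{0}\\{1}: {2}", reader[0], reader[1], reader[2]);
                        }
                    }
                }
            }
        }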

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed a good idea to dedicate some time to basic cluster concepts and theory. This post omits a lot of details that would explode the article’s size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can’t promise to give you complete definitions of cluster, cluster agent, quorum, voting, fencing and split-brain condition, so the following is more of an explanation. Here we go.

    -------- Cluster, HA, failover, switchover, scalability --------

    An attempted definition of a cluster: a cluster is a set of two or more server nodes dedicated to keeping application services alive, communicating with each other through the cluster software/framework, testing and probing the health status of server nodes and services, and keeping the application services running on them available through quorum-based decisions and switchover/failover techniques. That is, should a node that runs a service unexpectedly lose functionality or connectivity, the other nodes would take over and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.

    At this point we have to add that this defines an HA cluster, a High-Availability cluster, where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion. In the latter case, instances of the application run actively on separate cluster nodes serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way, or a database running in an active-active RAC setup.

    -------- Cluster architecture, interconnect, topologies --------

    Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects. How many cluster nodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two cluster nodes; several more topologies are described in the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster.

    Where shall these cluster nodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to avoid only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case, of course, you should place them at least in different fire zones, or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the cluster nodes being several kilometers away from each other. To cover really large distances, you probably need to move to a geocluster, which is a different kind of animal. What is a geocluster? A geographic cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.

    -------- Cluster resource types, agents, resources, resource groups --------

    So how does the cluster manage my applications?
    The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application state is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop it on node A, start it on node B). Starting, stopping and probing are the three actions that a cluster agent does. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that looks after the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle WebLogic, Zones, LDoms, NFS, DNS, etc.

    We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups, and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another, you switch the group between nodes, and the agents of the cluster resources in the group will do the following:

    On node A:
    - shut down the DB
    - unconfigure the LogicalHost IP the DB listener listens on
    - unmount the filesystem

    Then, on node B:
    - mount the FS
    - configure the IP
    - start up the DB

    -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------

    How do the cluster nodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; cluster nodes think very much alike. Still, in a production system every action has to be agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is gone or down, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I’d better take control and run the services!". The problem is, as I have already mentioned, cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances – now that is trouble. Also, in a two-node cluster each node has only 50% of the votes, that is, neither of them alone is allowed to run a cluster.

    This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons". The nodes need additional votes to run the cluster, and for this requirement a two-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split-brain condition), both nodes start to race and try to reserve the quorum device for themselves.
    They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1). The node that manages to lock the quorum device (e.g. if it’s an FC LUN, it SCSI-reserves it) wins the right to build/run a cluster; the other one – realizing it was late – panics/reboots to ensure the cluster configuration stays consistent.

    Losing the interconnect doesn’t only endanger the availability of services; it also endangers the consistency of the cluster configuration. Just imagine node A being down while the cluster configuration changes. Now node B goes down, and node A comes up. Node A isn’t up to date with the cluster configuration changes, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster configuration had changed, but the cluster would now run with an older configuration repository state – as if it had forgotten about the changes.

    Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that isn’t part of, or can’t currently join, the cluster cannot access the devices. This procedure is called fencing. It usually happens to storage LUNs via SCSI reservation.

    Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC, and that gets hit by a meteor, you lose one cluster node, which isn’t a problem, but you also lose your quorum, and the south cluster node can’t keep the cluster running for lack of votes. This problem can’t be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at, or to host a third cluster node. Otherwise, lacking majority, if you lose the site that had your quorum, you lose the cluster.

    Okay, we covered the very basics. We haven’t talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools or upgrade procedures – should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I’d be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation. Oh, and as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html

    Read the article

  • Can static methods be called using object/instance in .NET

    The answer is yes and no: yes in C++, Java and VB.NET; no in C#.

    This is only a compiler restriction in C#. You might see on some websites that we can break this restriction using reflection and delegates, but we can’t, according to my little research :) I shall try to explain. The following is a code sample that appears to break this rule using reflection; it seems that it is possible to call a static method using an object, p1:

        using System;

        namespace T
        {
            class Program
            {
                static void Main()
                {
                    var p1 = new Person() { Name = "Smith" };
                    typeof(Person).GetMethod("TestStatMethod").Invoke(p1, new object[] { });
                }

                class Person
                {
                    public string Name { get; set; }

                    public static void TestStatMethod()
                    {
                        Console.WriteLine("Hello");
                    }
                }
            }
        }

    But I do not think this method is being called using p1; rather, it is called using the type name Person. I shall try to prove this. Look at another example, where Test2 inherits from Test1 and both declare their own static Method1. Let’s see various scenarios.

    Scenario 1:

        static void Main()
        {
            Test1 t = new Test1();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

        class Test1
        {
            public static void Method1()
            {
                Console.WriteLine("At Test1::Method1");
            }
        }

        class Test2 : Test1
        {
            public static void Method1()
            {
                Console.WriteLine("At Test2::Method1");
            }
        }

    Output: At Test2::Method1

    Scenario 2:

        static void Main()
        {
            Test2 t = new Test2();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

    Output: At Test2::Method1

    Scenario 3:

        static void Main()
        {
            Test1 t = new Test2();
            typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { });
        }

    Output: At Test2::Method1

    In all the above scenarios the output is the same. That means reflection does not consider the object you pass to Invoke in the case of static methods; it always considers the type you specify in typeof(). So what is the use of passing an instance to Invoke? Let’s see the sample below:

        static void Main()
        {
            typeof(Test2).GetMethod("Method1").Invoke(null, new object[] { });
        }

    Output: At Test2::Method1

    I was able to invoke Method1 of Test2 without any object. No wonder here, as Method1 is static. So we may conclude that static methods cannot be called using instances (only in C#). Why has Microsoft restricted it in C#? Answer: there is really no use in calling a static method through an object reference, because static methods are stateless; but the latest Java and C++ compilers still allow calling static methods using instances.
    A Java sample:

        class Test
        {
            public static void main(String str[])
            {
                Person p = new Person();
                System.out.println(p.GetCount());
            }
        }

        class Person
        {
            public static int GetCount()
            {
                return 100;
            }
        }

    Output: 100
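    As a side note, here is a minimal sketch showing what the C# compiler itself reports when you try the instance call directly (the error text is what current compilers emit):

        using System;

        class Person
        {
            public static void TestStatMethod()
            {
                Console.WriteLine("Hello");
            }
        }

        class Program
        {
            static void Main()
            {
                var p1 = new Person();

                Person.TestStatMethod();   // OK: called through the type name.

                // p1.TestStatMethod();    // Does not compile in C#:
                // error CS0176: Member 'Person.TestStatMethod()' cannot be accessed
                // with an instance reference; qualify it with a type name instead
            }
        }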

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file to a SQL Server database, and I thought I’d share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly assigns each column a data type based on the first few rows and drops any data that does not match that data type. To overcome this problem, I used a schema.ini file to define the data type of each column in the CSV file and allow the provider to read it and recognize the exact data types of each column.

    Now what is schema.ini? Taken from the documentation: the schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids misinterpreting data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

    Points to remember before creating schema.ini:

    1. The schema information file must always be named 'schema.ini'.
    2. The schema.ini file must be kept in the same directory where the CSV file exists.
    3. The schema.ini file must be created before reading the CSV file.
    4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

    Here’s an example of what the schema looks like:

        [Employee.csv]
        ColNameHeader=False
        Format=CSVDelimited
        DateTimeFormat=dd-MMM-yyyy
        Col1=EmployeeID Long
        Col2=EmployeeFirstName Text Width 100
        Col3=EmployeeLastName Text Width 50
        Col4=EmployeeEmailAddress Text Width 50

    To get started, let’s go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project and set up the HTML markup with one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file to the SQL Server database.
    Here is the full code block:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.OleDb;
        using System.IO;
        using System.Text;

        namespace WebApplication1
        {
            public partial class CSVToSQLImporting : System.Web.UI.Page
            {
                private string GetConnectionString()
                {
                    return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
                }

                // Generates and executes a CREATE TABLE statement based on the
                // column names and data types of the source DataTable.
                private void CreateDatabaseTable(DataTable dt, string tableName)
                {
                    string sqlQuery = string.Empty;
                    string sqlDBType = string.Empty;
                    string dataType = string.Empty;
                    int maxLength = 0;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                    for (int i = 0; i < dt.Columns.Count; i++)
                    {
                        dataType = dt.Columns[i].DataType.ToString();
                        if (dataType == "System.Int32")
                        {
                            sqlDBType = "INT";
                        }
                        else if (dataType == "System.String")
                        {
                            sqlDBType = "NVARCHAR";
                            maxLength = dt.Columns[i].MaxLength;
                        }

                        if (maxLength > 0)
                        {
                            sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                        }
                        else
                        {
                            sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                        }
                    }

                    sqlQuery = sb.ToString();
                    sqlQuery = sqlQuery.Trim().TrimEnd(',');
                    sqlQuery = sqlQuery + " )";

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                // Bulk-loads the CSV file into the newly created table.
                private void LoadDataToDatabase(string tableName, string fileFullPath, string delimiter)
                {
                    string sqlQuery = string.Empty;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                    sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                    sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimiter));

                    sqlQuery = sb.ToString();

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void BTNImport_Click(object sender, EventArgs e)
                {
                    if (FileUpload1.HasFile)
                    {
                        FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                        if (fileInfo.Name.Contains(".csv"))
                        {
                            string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                            string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                            // Save the CSV file on the server inside 'UploadedCSVFiles'
                            FileUpload1.SaveAs(csvFilePath);

                            // Fetch the location of the CSV file
                            string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                            string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                            // A schema.ini in the same folder (if present) overrides these Extended Properties.
                            string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                            // Load the data from the CSV into a DataTable
                            OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                            DataTable dtCSV = new DataTable();

                            adapter.FillSchema(dtCSV, SchemaType.Mapped);
                            adapter.Fill(dtCSV);

                            if (dtCSV.Rows.Count > 0)
                            {
                                CreateDatabaseTable(dtCSV, fileName);
                                Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                                string fileFullPath = filePath + fileInfo.Name;
                                LoadDataToDatabase(fileName, fileFullPath, ",");

                                Label1.Text = string.Format("({0}) records have been loaded into the table {1}.", dtCSV.Rows.Count, fileName);
                            }
                            else
                            {
                                LBLError.Text = "File is empty.";
                            }
                        }
                        else
                        {
                            LBLError.Text = "Unable to recognize file.";
                        }
                    }
                }
            }
        }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() returns a string; it simply gets the connection string that is configured in the web.config file. CreateDatabaseTable() accepts two (2) parameters, the DataTable and the file name; as the method name suggests, it automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() accepts three (3) parameters: the tableName, fileFullPath and delimiter value; this method is where the actual saving or importing of data from the CSV into SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and this is also where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic trappings and validations within that event.

    Now, to test the importing utility, let’s create some simple data in CSV format. Just for the simplicity of this demo, create a CSV file named "Employee" and add some data to it. Here’s an example below:

        1,VMS,Durano,[email protected]
        2,Jennifer,Cortes,[email protected]
        3,Xhaiden,Durano,[email protected]
        4,Angel,Santos,[email protected]
        5,Kier,Binks,[email protected]
        6,Erika,Bird,[email protected]
        7,Vianne,Durano,[email protected]
        8,Lilibeth,Tree,[email protected]
        9,Bon,Bolger,[email protected]
        10,Brian,Jones,[email protected]

    Now save the newly created CSV file to some location on your hard drive. Okay, let’s run the application, browse to the CSV file that we have just created, and click the Import button. If you now look at the database that we created earlier, you’ll notice that the Employee table has been created with the imported data in it.

    That’s it! I hope someone finds this post useful! Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET
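    One caveat with the BULK INSERT approach above: the CSV path must be readable by the SQL Server service itself, which isn’t always the case when the web server and database server are separate machines. As an alternative sketch (not from the original post), you could push the already-filled DataTable to the new table with SqlBulkCopy instead:

        using System.Data;
        using System.Data.SqlClient;

        public static class CsvImportHelper
        {
            // Writes the rows of an in-memory DataTable into an existing SQL Server table.
            // Assumes the destination table was already created (e.g. by CreateDatabaseTable above).
            public static void LoadDataTableToDatabase(DataTable dtCSV, string tableName, string connectionString)
            {
                using (var sqlConn = new SqlConnection(connectionString))
                using (var bulkCopy = new SqlBulkCopy(sqlConn))
                {
                    sqlConn.Open();
                    bulkCopy.DestinationTableName = tableName;
                    bulkCopy.WriteToServer(dtCSV); // Streams all rows to the server in bulk.
                }
            }
        }

    This keeps the file handling entirely on the web server and avoids quoting issues in the generated BULK INSERT statement.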

    Read the article

  • Announcing ASP.NET MVC 3 (Release Candidate 2)

    - by ScottGu
    Earlier today the ASP.NET team shipped the final release candidate (RC2) for ASP.NET MVC 3. You can download and install it here.

    Almost there…

    Today’s RC2 release is the near-final release of ASP.NET MVC 3, and is a true “release candidate” in that we are hoping to not make any more code changes with it. We are publishing it today so that people can do final testing with it, let us know if they find any last minute “showstoppers”, and start updating their apps to use it. We will officially ship the final ASP.NET MVC 3 “RTM” build in January.

    Works with both VS 2010 and VS 2010 SP1 Beta

    Today’s ASP.NET MVC 3 RC2 release works with both the shipping version of Visual Studio 2010 / Visual Web Developer 2010 Express, as well as the newly released VS 2010 SP1 Beta. This means that you do not need to install VS 2010 SP1 (or the SP1 beta) in order to use ASP.NET MVC 3. It works just fine with the shipping Visual Studio 2010. I’ll do a blog post next week, though, about some of the nice additional feature goodies that come with VS 2010 SP1 (including IIS Express and SQL CE support within VS) which make the dev experience for both ASP.NET Web Forms and ASP.NET MVC even better.

    Bugs and Perf Fixes

    Today’s ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations. Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3.

    Final Tweaks and Fit-N-Finish

    In addition to bug fixes and performance optimizations, today’s RC2 build contains a number of last-minute feature tweaks and “fit-n-finish” changes for the new ASP.NET MVC 3 features. The feedback and suggestions we’ve received during the public previews have been invaluable in guiding these final tweaks, and we really appreciate people’s support in sending this feedback our way. Below is a short list of some of the feature changes/tweaks made between last month’s ASP.NET MVC 3 RC release and today’s ASP.NET MVC 3 RC2 release:

    jQuery updates and addition of jQuery UI

    The default ASP.NET MVC 3 project templates have been updated to include jQuery 1.4.4 and jQuery Validation 1.7. We are also excited to announce today that we are including jQuery UI within our default ASP.NET project templates going forward. jQuery UI provides a powerful set of additional UI widgets and capabilities. It will be added by default to your project’s \scripts folder when you create new ASP.NET MVC 3 projects.

    Improved View Scaffolding

    The T4 templates used for scaffolding views with the Add-View dialog now generate views that use Html.EditorFor instead of helpers such as Html.TextBoxFor. This change enables you to optionally annotate models with metadata (using data annotation attributes) to better customize the output of your UI at runtime. The Add View scaffolding also supports improved detection and usage of primary key information on models (including support for naming conventions like ID, ProductID, etc). For example, the Add View dialog box uses this information to ensure that the primary key value is not scaffolded as an editable form field, and that links between views are auto-generated correctly with primary key information. The default Edit and Create templates also now include references to the jQuery scripts needed for client validation. Scaffolded form views now support client-side validation by default (no extra steps required).
    Client-side validation with ASP.NET MVC 3 is also done using an unobtrusive JavaScript approach – making pages fast and clean.

    [ControllerSessionState] –> [SessionState]

    ASP.NET MVC 3 adds support for session-less controllers. With the initial RC you used a [ControllerSessionState] attribute to specify this. We shortened this in RC2 to just be [SessionState]. Note that in addition to turning off session state, you can also set it to be read-only (which is useful for web-farm scenarios where you are reading but not updating session state on a particular request).

    [SkipRequestValidation] –> [AllowHtml]

    ASP.NET MVC includes built-in support to protect against HTML and cross-site script injection attacks, and will throw an error by default if someone tries to post HTML content as input. Developers need to explicitly indicate that this is allowed (and that they’ve hopefully built their app to securely support it) in order to enable it. With ASP.NET MVC 3, we are also now supporting a new attribute that you can apply to properties of models/viewmodels to indicate that HTML input is enabled, which enables much more granular protection in a DRY way. In last month’s RC release this attribute was named [SkipRequestValidation]. With RC2 we renamed it to [AllowHtml] to make it more intuitive. Setting the [AllowHtml] attribute on a model/viewmodel property will cause ASP.NET MVC 3 to turn off HTML injection protection when model binding just that property.

    Html.Raw() helper method

    The new Razor view engine introduced with ASP.NET MVC 3 automatically HTML encodes output by default. This helps provide an additional level of protection against HTML and script injection attacks. With RC2 we are adding a Html.Raw() helper method that you can use to explicitly indicate that you do not want to HTML encode your output, and instead want to render the content “as-is”.

    ViewModel/View –> ViewBag

    ASP.NET MVC has (since V1) supported a ViewData[] dictionary within Controllers and Views that enables developers to pass information from a Controller to a View in a late-bound way. This approach can be used instead of, or in combination with, a strongly-typed model class. A common use case is where a strongly typed Product model is passed to the view in addition to two late-bound variables via the ViewData[] dictionary. With ASP.NET MVC 3 we are introducing a new API that takes advantage of the dynamic type support within .NET 4 to set/retrieve these values. It allows you to use standard “dot” notation to specify any number of additional variables to be passed, and does not require that you create a strongly-typed class to do so. With earlier previews of ASP.NET MVC 3 we exposed this API using a dynamic property called “ViewModel” on the Controller base class, and with a dynamic property called “View” within view templates. A lot of people found the fact that there were two different names confusing, and several also said that using the name ViewModel was confusing in this context – since often you create strongly-typed ViewModel classes in ASP.NET MVC, and they do not use this API. With RC2 we are exposing a dynamic property that has the same name – ViewBag – within both Controllers and Views. It is a dynamic collection that allows you to pass additional bits of data from your controller to your view template to help generate a response.
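    For example, we could use ViewBag to pass a time-stamp message as well as a list of all categories from a controller to a view template. Here is a minimal sketch of the pattern (the Product model, Category list and ProductRepository are illustrative assumptions, not the exact code from the original post):

        using System;
        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            public ActionResult Details(int id)
            {
                // Strongly-typed model for the view (Product and ProductRepository
                // are assumed to be defined elsewhere in the application).
                Product product = ProductRepository.GetProduct(id);

                // Late-bound extras travel in the dynamic ViewBag:
                ViewBag.Message = "Rendered at " + DateTime.Now;
                ViewBag.Categories = new SelectList(ProductRepository.GetAllCategories(),
                                                    "CategoryID", "Name");

                return View(product);
            }
        }

    The corresponding Razor view template is strongly typed to Product, yet can still read both late-bound values back out of the ViewBag:

        @model Product

        <p>@ViewBag.Message</p>
        @Html.DropDownListFor(m => m.CategoryID, (SelectList)ViewBag.Categories)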
    The view template (which is strongly typed to expect a Product class as its model) uses the two extra bits of information we passed in our ViewBag to generate the response – in particular, the list of categories passed in the dynamic ViewBag collection is used to generate a dropdownlist of friendly category names to help set the CategoryID property of our Product object. The above Controller/View combination then generates the appropriate HTML response.

    Output Caching Improvements

    ASP.NET MVC 3’s output caching system no longer requires you to specify a VaryByParam property when declaring an [OutputCache] attribute on a Controller action method. MVC 3 now automatically varies the output cached entries when you have explicit parameters on your action method – allowing you to cleanly enable output caching on actions. In addition to supporting full page output caching, ASP.NET MVC 3 also supports partial-page caching – which allows you to cache a region of output and re-use it across multiple requests or controllers. The [OutputCache] behavior for partial-page caching was updated with RC2 so that sub-content cached entries are varied based on input parameters as opposed to the URL structure of the top-level request – which makes caching scenarios both easier and more powerful than the behavior in the previous RC.

    @model declaration does not add whitespace

    In earlier previews, the strongly-typed @model declaration at the top of a Razor view added a blank line to the rendered HTML output. This has been fixed so that the declaration does not introduce whitespace.

    Changed "Html.ValidationMessage" Method to Display the First Useful Error Message

    The behavior of the Html.ValidationMessage() helper was updated to show the first useful error message instead of simply displaying the first error. During model binding, the ModelState dictionary can be populated from multiple sources with error messages about the property, including from the model itself (if it implements IValidatableObject), from validation attributes applied to the property, and from exceptions thrown while the property is being accessed. When the Html.ValidationMessage() method displays a validation message, it now skips model-state entries that include an exception, because these are generally not intended for the end user. Instead, the method looks for the first validation message that is not associated with an exception and displays that message. If no such message is found, it defaults to a generic error message that is associated with the first exception.

    RemoteAttribute “Fields” –> “AdditionalFields”

    ASP.NET MVC 3 includes built-in remote validation support with its validation infrastructure. This means that the client-side validation script library used by ASP.NET MVC 3 can automatically call back to controllers you expose on the server to determine whether an input element is indeed valid as the user is editing the form (allowing you to provide real-time validation updates). You can accomplish this by decorating a model/viewmodel property with a [Remote] attribute that specifies the controller/action that should be invoked to remotely validate it.
    With the RC this attribute had a “Fields” property that could be used to specify additional input elements that should be sent from the client to the server to help with the validation logic. To improve the clarity of what this property does, we have renamed it to “AdditionalFields” with today’s RC2 release.

    ViewResult.Model and ViewResult.ViewBag Properties

    The ViewResult class now exposes both a “Model” and “ViewBag” property off of it. This makes it easier to unit test Controllers that return views, and avoids you having to access the Model via the ViewResult.ViewData.Model property.

    Installation Notes

    You can download and install the ASP.NET MVC 3 RC2 build here. It can be installed on top of the previous ASP.NET MVC 3 RC release (it should just replace the bits as part of its setup). The one component that will not be updated by the above setup (if you already have it installed) is the NuGet Package Manager. If you already have NuGet installed, please go to the Visual Studio Extensions Manager (via the Tools –> Extensions menu option) and click on the “Updates” tab. You should see NuGet listed there – please click the “Update” button next to it to have VS update the extension to today’s release. If you do not have NuGet installed (and did not install the ASP.NET MVC RC build), then NuGet will be installed as part of your ASP.NET MVC 3 setup, and you do not need to take any additional steps to make it work.

    Summary

    We are really close to the final ASP.NET MVC 3 release, and will deliver the final “RTM” build of it next month. It has been only a little over 7 months since ASP.NET MVC 2 shipped, and I’m pretty amazed by the huge number of new features, improvements, and refinements that the team has been able to add with this release (Razor, Unobtrusive JavaScript, NuGet, Dependency Injection, Output Caching, and a lot, lot more). I’ll be doing a number of blog posts over the next few weeks talking about many of them in more depth.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog post, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful.

    About SQL Server disaster recovery

    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the recovery plan itself.

    What is a good SQL Server disaster recovery plan?

    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?

    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs make it possible to repair a SQL Server database and bring it to the state it was in before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?

    SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, and they return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match them to values in the MDF file. And when you have finally read the transactions executed, you still have to create a script for them.

    How to easily repair a SQL database?

    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next.

    ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included.

    In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment such as SQL Server Management Studio or any other tool, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
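    As a follow-up to the point-in-time restore discussion above: the sequence SQL Server walks through (full backup first, then each transaction log backup up to the recovery point) looks roughly like this in T-SQL. This is a minimal sketch — the backup file names, paths, and STOPAT time are hypothetical:

    -- Restore the last full backup, leaving the database ready for further log restores
    RESTORE DATABASE Restored_AW2014
    FROM DISK = N'D:\Backup\AW2014_Full.bak'
    WITH NORECOVERY;

    -- Replay each transaction log backup in sequence
    RESTORE LOG Restored_AW2014
    FROM DISK = N'D:\Backup\AW2014_Log_01.trn'
    WITH NORECOVERY;

    -- Stop at the moment just before the disaster and bring the database online
    RESTORE LOG Restored_AW2014
    FROM DISK = N'D:\Backup\AW2014_Log_02.trn'
    WITH STOPAT = N'2014-05-01T14:30:00', RECOVERY;

    On a large database each of these steps can take a long time, which is exactly the downside the article describes.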

    Read the article

  • What to Do When Windows Won’t Boot

    - by Chris Hoffman
    You turn on your computer one day and Windows refuses to boot — what do you do? “Windows won’t boot” is a common symptom with a variety of causes, so you’ll need to perform some troubleshooting. Modern versions of Windows are better at recovering from this sort of thing. Where Windows XP might have stopped in its tracks when faced with this problem, modern versions of Windows will try to automatically run Startup Repair. First Things First Be sure to think about changes you’ve made recently — did you recently install a new hardware driver, connect a new hardware component to your computer, or open your computer’s case and do something? It’s possible the hardware driver is buggy, the new hardware is incompatible, or that you accidentally unplugged something while working inside your computer. The Computer Won’t Power On At All If your computer won’t power on at all, ensure it’s plugged into a power outlet and that the power connector isn’t loose. If it’s a desktop PC, ensure the power switch on the back of its case — on the power supply — is set to the On position. If it still won’t power on at all, it’s possible you disconnected a power cable inside its case. If you haven’t been messing around inside the case, it’s possible the power supply is dead. In this case, you’ll have to get your computer’s hardware fixed or get a new computer. Be sure to check your computer monitor — if your computer seems to power on but your screen stays black, ensure your monitor is powered on and that the cable connecting it to your computer’s case is plugged in securely at both ends. The Computer Powers On And Says No Bootable Device If your computer is powering on but you get a black screen that says something like “no bootable device” or another sort of “disk error” message, your computer can’t seem to boot from the hard drive that Windows was installed on. Enter your computer’s BIOS or UEFI firmware setup screen and check its boot order setting, ensuring that it’s set to boot from its hard drive. If the hard drive doesn’t appear in the list at all, it’s possible your hard drive has failed and can no longer be booted from. In this case, you may want to insert Windows installation or recovery media and run the Startup Repair operation. This will attempt to make Windows bootable again. For example, if something overwrote your Windows drive’s boot sector, this will repair the boot sector. If the recovery environment won’t load or doesn’t see your hard drive, you likely have a hardware problem. Be sure to check your BIOS or UEFI’s boot order first if the recovery environment won’t load. You can also attempt to manually fix Windows boot loader problems using the fixmbr and fixboot commands. Modern versions of Windows should be able to fix this problem for you with the Startup Repair wizard, so you shouldn’t actually have to run these commands yourself. Windows Freezes or Crashes During Boot If Windows seems to start booting but fails partway through, you may be facing either a software or hardware problem. If it’s a software problem, you may be able to fix it by performing a Startup Repair operation. If you can’t do this from the boot menu, insert a Windows installation disc or recovery disk and use the startup repair tool from there. If this doesn’t help at all, you may want to reinstall Windows or perform a Refresh or Reset on Windows 8. 
If the computer encounters errors while attempting to perform startup repair or reinstall Windows, or the reinstall process works properly and you encounter the same errors afterwards, you likely have a hardware problem. Windows Starts and Blue Screens or Freezes If Windows crashes or blue-screens on you every time it boots, you may be facing a hardware or software problem. For example, malware or a buggy driver may be loading at boot and causing the crash, or your computer’s hardware may be malfunctioning. To test this, boot your Windows computer in safe mode. In safe mode, Windows won’t load typical hardware drivers or any software that starts automatically at startup. If the computer is stable in safe mode, try uninstalling any recently installed hardware drivers, performing a system restore, and scanning for malware. If you’re lucky, one of these steps may fix your software problem and allow you to boot Windows normally. If your problem isn’t fixed, try reinstalling Windows or performing a Refresh or Reset on Windows 8. This will reset your computer back to its clean, factory-default state. If you’re still experiencing crashes, your computer likely has a hardware problem. Recover Files When Windows Won’t Boot If you have important files that will be lost and want to back them up before reinstalling Windows, you can use a Windows installer disc or Linux live media to recover the files. These run entirely from a CD, DVD, or USB drive and allow you to copy your files to other external media, such as another USB stick or an external hard drive. If you can’t boot from a Windows installer disc or Linux live CD, you may need to go into your BIOS or UEFI and change the boot order setting. If even this doesn’t work — or if you can boot from the devices and your computer freezes or you can’t access your hard drive — you likely have a hardware problem. You can try pulling the computer’s hard drive, inserting it into another computer, and recovering your files that way. Following these steps should fix the vast majority of Windows boot issues — at least the ones that are actually fixable. The dark cloud that always hangs over such issues is the possibility that the hard drive or another component in the computer may be failing. Image Credit: Karl-Ludwig G. Poggemann on Flickr, Tzuhsun Hsu on Flickr
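    The fixmbr and fixboot commands mentioned above are exposed through the bootrec utility in the recovery environment’s Command Prompt on modern versions of Windows. A rough sketch of the usual sequence — exact options can vary by Windows version, so treat this as illustrative rather than a guaranteed fix:

    REM Rewrite the master boot record without touching the partition table
    bootrec /fixmbr

    REM Write a new boot sector to the system partition
    bootrec /fixboot

    REM Scan all disks for Windows installations and rebuild the boot configuration data
    bootrec /rebuildbcd

    If the recovery environment itself won’t load, these commands won’t be reachable, which again points toward a hardware problem.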

    Read the article

  • A Look Back at 2010 Predictions

    - by David Dorf
    Now is the time of year people make their predictions for next year, but before I start thinking about 2011 it's worth a look back to see how my predictions for 2010 fared. 1. Borders and Blockbuster bite the dust. I would have never predicted a strong brand such as Circuit City could die, but now I know it can happen to anyone. Borders has lost the battle with Barnes & Noble and Blockbuster has lost to Netflix. And just to be sure, Amazon put an extra nail in each coffin. Borders received additional investment from Bennett LeBow to keep it afloat, but the stock is down around $1.25 with no profits in sight. Blockbuster filed for bankruptcy back in September. 2. Every retailer finally has a page on Facebook... but very few figure out how to keep fans engaged. Retailer postings become noise, and fans start to unsubscribe. Twitter goes in the same direction. A few standout retailers will figure out how to use social media, and the rest will remain dumbfounded. Most retailers are on the Facebook bandwagon, and their fan bases seem to be increasing thanks to promotions like The Gap's logo redesign, Lowes' Black Friday sneak peek, and Walmart's Crowd Savers. There are several examples of f-commerce advancements, including some interesting integrations from Amazon. 3. Smartphones consolidate and grow. More and more people will step up to smartphones, most of which will choose iPhone, Blackberry, and Android phones. Other smartphones will vanish, and networks will start to strain. But retailers will finally embrace mobile as the next big channel. Retail marketing departments will build mobile apps without the help of their IT department, and eventually they will get into a bind. Android has been on a tear lately, stealing market share from Blackberry. Palm and Microsoft are trending down, and Apple is holding steady. Smartphone sales are up 15% and expected to continue. Retailers understand the importance of mobile, and some innovative applications have been produced this year. 4. Google helps the little guys. Google will push its Favorite Places project to help give exposure to small retailers and restaurants. They will enable small retailers to act like big ones by providing storefronts, detailed product information, and coupons for consumers. Google will find a way to bring augmented reality to the masses. I can't say I've seen much new from Google regarding Favorite Places, but they've continued to push local product search. From the PC or smartphone, consumers can search for products and see which nearby stores have them in stock. Oracle Retail even productized an integration to Google to support this effort. I suppose if Google ever buys Groupon then it will bring them even closer to local shopping. Google talked about augmented humanity, but that has nothing to do with augmented reality. 5. Steve Jobs Is Bugs Bunny and Steve Ballmer is Elmer Fudd. (OK, I stole that headline from an InformationWeek article. I couldn't resist.) Both Apple and Microsoft will continue to open new stores, but only Apple will show real growth. POSReady 2009 (formerly WEPOS) will continue to share the POS market with Linux. The iPhone and iPod will continue to capture market share, but there won't be an Apple tablet. There won't be an Apple tablet? What was I thinking? While Apple has well over 300 stores, there are fewer than 10 Microsoft stores. 
Initial impressions show that even though Microsoft is locating its stores near Apple Stores, it is not converting customers, with shoppers citing a lack of assortment and high prices. 6. Consolidation of e-commerce software providers. Software vendors in the areas of search, reviews, online call-centers, payments, and e-commerce will consolidate, partly driven by the success of m-commerce and SaaS. Amazon will find someone else to buy, and eBay will continue to lose momentum. Consolidation of e-commerce providers continued with IBM acquiring Sterling Commerce and CoreMetrics, and Oracle recently announcing the acquisition of ATG. Amazon grabbed Zappos, Woot, and Diapers.com to continue its dominance of online selling. While eBay's Marketplace growth may have slowed, its PayPal division is doing quite well, fueled in part by demand for mobile payments. 7. Book publishers mirror music labels. Just as the iPod brought digital downloads to the masses, the Kindle and Nook will power the e-book revolution. Books will continue to use DRM for a few more years before following the path of music. Publishers will try to preserve the margins of hardbacks by associating e-book releases with paperbacks. Amazon has done a good job providing e-reader clients for smartphones, PCs, and tablets. Competition from Barnes & Noble has forced Amazon to support book loaning, and both companies are making it easier for people to publish ebooks (with or without DRM). Progress is slow but steady. 8. NFC makes inroads, RFID treads water. Near Field Communication starts to appear in mobile phones, and retailers beta test its use for payments and loyalty programs. RFID tag costs come down a bit, but not enough to spur accelerated adoption. Nokia announced plans to offer NFC-enabled phones in 2011, and rumors are swirling about NFC in the upcoming iPhone. I think NFC is heading in the right direction, and I've heard more interest from retailers about specialized uses for RFID. 9. Digital Signage goes the way of augmented reality. People use their camera phones to leave geo-tagged notes all over cities, rating stores and restaurants, and "painting" graffiti. But people get tired of holding their phones in front of their faces, so AR glasses are offered in much the same way Bluetooth headsets emerged. Retailers experiment with in-store advertising using AR. Several retailers like Pizza Hut, Benetton, and Target have experimented with AR, but it's still somewhat of a gimmick used by marketing. I think this prediction is a year or two too early. 10. JDA flip-flops again. After announcing their embracing of the .Net architecture, then switching to J2EE after the Manugistics acquisition, JDA will finally decide to standardize on Apple's Objective C. Everything will be ported to the iPhone and be available on the AppStore. After all, there's not much left to try. This was, of course, a joke but the sentiment is still valid. JDA seems more supply-chain focused than retail focused, which is an outgrowth of their i2 acquisition. Of the 10 predictions, I'm going to say I got 6 somewhat correct. (Don't you just love grading your own paper?) Soon I'll post my predictions for 2011 so be on the lookout. Until then here's one more prediction: Va Tech beats Stanford in the Orange Bowl -- count on it!

    Read the article

  • Segmentation Fault (11) with modwsgi on CentOS 5.7 when running pyramid app

    - by carbotex
    I'm getting a Segmentation fault error when trying to access the "Hello World" pyramid app. This error only occurs when running against a CentOS 5.7 setup, but there is no problem whatsoever when tested against OSX and Arch Linux. Could it be a CentOS-specific issue?

    [error] [client 10.211.55.2] Premature end of script headers: pyramid.wsgi
    [notice] child pid 31212 exit signal Segmentation fault (11)

    I have tried to follow the troubleshooting guides posted here http://code.google.com/p/modwsgi/wiki/InstallationIssues which suggest that it might be caused by a missing shared library. A quick check reveals that the shared library is not the issue.

    [centos57@localhost modules]$ ldd mod_wsgi.so
    linux-gate.so.1 => (0x00e6a000)
    libpython2.7.so.1.0 => /home/python/lib/libpython2.7.so.1.0 (0x0024c000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00da8000)
    libdl.so.2 => /lib/libdl.so.2 (0x00cd6000)
    libutil.so.1 => /lib/libutil.so.1 (0x00110000)
    libm.so.6 => /lib/libm.so.6 (0x0085c000)
    libc.so.6 => /lib/libc.so.6 (0x00682000)
    /lib/ld-linux.so.2 (0x0012b000)

    Then I found another clue that might help solve my problem. Unfortunately, libexpat is not the source of the problem either. http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary

    [centos57@localhost bin]$ ldd ~/httpd/bin/httpd | grep expat
    libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x00b00000)
    [centos57@localhost bin]$ strings /usr/local/lib/libexpat.so.1 | grep expat
    libexpat.so.1
    expat_2.0.1
    [centos57@localhost bin]$ python
    Python 2.7.2 (default, Nov 26 2011, 08:08:44)
    [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pyexpat
    >>> pyexpat.version_info
    (2, 0, 0)
    >>>

    I've been pulling my hair out trying to figure out what I'm missing in my setup. Why does the problem only occur with CentOS? Here is the detailed setup: Apache 2.2.19, Python 2.7.2, mod_wsgi-3.3.

    /home/httpd/conf/extra/pyramid.wsgi:

    from pyramid.paster import get_app
    application = get_app('/home/homecamera/hcadmin/root/production.ini', 'main')

    /home/httpd/conf/extra/modwsgi.conf:

    LoadModule wsgi_module modules/mod_wsgi.so
    WSGIScriptAlias /myapp /home/root/test.wsgi
    <Directory /home/root>
        WSGIProcessGroup pyramid
        Order allow,deny
        Allow from all
    </Directory>
    # Use only 1 Python sub-interpreter. Multiple sub-interpreters
    # play badly with C extensions.
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIDaemonProcess pyramid user=daemon group=daemon processes=1 \
        threads=4 \
        python-path=/home/python/lib/python2.7/site-packages
    WSGIScriptAlias /hello /home/httpd/conf/extra/pyramid.wsgi
    <Directory /home/httpd/conf/extra>
        WSGIProcessGroup pyramid
        Order allow,deny
        Allow from all
    </Directory>

    Again, this same setup works on OSX and Arch Linux but not on CentOS 5.7. Could someone out there point me in the right direction before I run out of hair?

    ====================================================================================

    When Apache is started with gdb, I get a couple of warnings:

    Reading symbols from /home/httpd/bin/httpd...done.
    Attaching to program: /home/httpd/bin/httpd, process 1821
    warning: .dynamic section for "/lib/libcrypt.so.1" is not at the expected address
    warning: difference appears to be caused by prelink, adjusting expectations
    warning: .dynamic section for "/lib/libutil.so.1" is not at the expected address
    warning: difference appears to be caused by prelink, adjusting expectations

    gdb output after hitting the refresh button to load pyramid:

    (gdb) cont
    Continuing. 
warning: .dynamic section for "/usr/lib/libgssapi_krb5.so.2" is not at the expected address
warning: difference appears to be caused by prelink, adjusting expectations
warning: .dynamic section for "/usr/lib/libkrb5.so.3" is not at the expected address
warning: difference appears to be caused by prelink, adjusting expectations
warning: .dynamic section for "/lib/libresolv.so.2" is not at the expected address
warning: difference appears to be caused by prelink, adjusting expectations

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x8edbb90 (LWP 1824)]
0x0814c120 in EVP_PKEY_CTX_dup ()

apache_error_log:

[info] mod_wsgi (pid=1821): Starting process 'pyramid' with threads=1.
[info] mod_wsgi (pid=1821): Initializing Python.
[info] mod_wsgi (pid=1821): Attach interpreter ''.
[info] mod_wsgi (pid=1821): Create interpreter 'web.domain.com:20000|/hcadmin'.
[info] [client 10.211.55.2] mod_wsgi (pid=1821, process='pyramid', application='web.domain.com:20000|/hcadmin'): Loading WSGI script '/home/httpd/conf/extra/pyramid.wsgi'.
[error] hello 1
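    One observation on the backtrace above, offered as a hypothesis rather than a confirmed diagnosis: the crash lands in EVP_PKEY_CTX_dup, an OpenSSL function, which is a classic symptom of two different OpenSSL builds being loaded into the same Apache process (for example, mod_ssl linked against the system OpenSSL while the custom-built Python 2.7 links against a newer one under /usr/local). A quick check along those lines — the module paths below are taken from the post and may differ on your machine:

    # Which libssl/libcrypto does Apache's mod_ssl link against?
    ldd /home/httpd/modules/mod_ssl.so | grep -Ei 'ssl|crypto'

    # Which ones do Python's crypto-related extension modules link against?
    ldd /home/python/lib/python2.7/lib-dynload/_ssl.so | grep -Ei 'ssl|crypto'
    ldd /home/python/lib/python2.7/lib-dynload/_hashlib.so | grep -Ei 'ssl|crypto'

    If the two sets of paths disagree, rebuilding Python (or mod_wsgi) against the same OpenSSL that Apache uses would be the thing to try.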

    Read the article

  • AdvancedFormatProvider: Making string.format do more

    - by plblum
    When I have an integer that I want to format within the String.Format() and ToString(format) methods, I’m always forgetting the format symbol to use with it. That’s probably because it’s not very intuitive. Use {0:N0} if you want it with group (thousands) separators. text = String.Format("{0:N0}", 1000); // returns "1,000"   int value1 = 1000; text = value1.ToString("N0"); Use {0:D} or {0:G} if you want it without group separators. text = String.Format("{0:D}", 1000); // returns "1000"   int value2 = 1000; text2 = value2.ToString("D"); The {0:D} is especially confusing because Microsoft gives the token the name “Decimal”. I thought it reasonable to have a new format symbol for String.Format, "I" for integer, and the ability to tell it whether it shows the group separators. Along the same lines, why not expand the format symbols for currency ({0:C}) and percent ({0:P}) to let you omit the currency or percent symbol, omit the group separator, and even drop the decimal part when the value is a whole number? My solution is an open source project called AdvancedFormatProvider, a group of classes that provides the new format symbols, continues to support the rest of the native symbols, and makes it easy to plug in additional format symbols. Please visit https://github.com/plblum/AdvancedFormatProvider to learn about it in detail and explore how it’s implemented. The rest of this post will explore some of the concepts it takes to expand String.Format() and ToString(format). AdvancedFormatProvider benefits: Supports {0:I} token for integers. It offers the {0:I-,} option to omit the group separator. Supports {0:C} token with several options. {0:C-$} omits the currency symbol. {0:C-,} omits group separators, and {0:C-0} hides the decimal part when the value would show “.00”. For example, 1000.0 becomes “$1000” while 1000.12 becomes “$1000.12”. Supports {0:P} token with several options. {0:P-%} omits the percent symbol. {0:P-,} omits group separators, and {0:P-0} hides the decimal part when the value would show “.00”. For example, 1 becomes “100 %” while 1.1223 becomes “112.23 %”. Provides a plug-in framework that lets you create new formatters to handle specific format symbols. You register them globally so you can just pass the AdvancedFormatProvider object into String.Format and ToString(format) without having to figure out which plug-ins to add. text = String.Format(AdvancedFormatProvider.Current, "{0:I}", 1000); // returns "1,000" text2 = String.Format(AdvancedFormatProvider.Current, "{0:I-,}", 1000); // returns "1000" text3 = String.Format(AdvancedFormatProvider.Current, "{0:C-$-,}", 1000.0); // returns "1000.00" The IFormatProvider parameter Microsoft has made formatting in String.Format() and ToString(format) expandable. They each accept an additional parameter: an object that implements System.IFormatProvider. This interface has a single member, the GetFormat() method, which returns an object that knows how to convert the format symbol and value into the desired string. There are already a number of web-based resources to teach you about IFormatProvider and the companion interface ICustomFormatter. I’ll defer to them if you want to dig more into the topic. The only thing I want to point out is what I think are implementation considerations. Why GetFormat() always tests for ICustomFormatter When you see examples of implementing IFormatProviders, the GetFormat() method always tests the parameter against the ICustomFormatter type. Why is that? 
public object GetFormat(Type formatType)
{
    if (formatType == typeof(ICustomFormatter))
        return this;
    else
        return null;
}
The value of formatType is already predetermined by the .NET framework. String.Format() uses the StringBuilder.AppendFormat() method to parse the string, extracting the tokens and calling GetFormat() with the ICustomFormatter type. (The .NET framework also calls GetFormat() with the types of System.Globalization.NumberFormatInfo and System.Globalization.DateTimeFormatInfo but these are exclusive to how the System.Globalization.CultureInfo class handles its implementation of IFormatProvider.) Your code replaces instead of expands I would have expected the caller to pass in the format string to GetFormat() to allow your code to determine if it handles the request. My vision would be to return null when the format string is not supported. The caller would iterate through IFormatProviders until it finds one that handles the format string. Unfortunately that is not the case. The reason you write GetFormat() as above is that the caller expects an object that handles all formatting cases. You are effectively supposed to write enough code in your formatter to handle your new cases and call .NET functions (like String.Format() and ToString(format)) to handle the original cases. It’s not hard to support the native functions from within your ICustomFormatter.Format function. Just test the format string to see if it applies to you. If not, call String.Format() with a token using the format passed in.
public string Format(string format, object arg, IFormatProvider formatProvider)
{
    if (format.StartsWith("I"))
    {
        // handle "I" formatter
    }
    else
        return String.Format(formatProvider, "{0:" + format + "}", arg);
}
Formatters are only used by explicit request Each time you write a custom formatter (implementer of ICustomFormatter), it is not used unless you explicitly pass an IFormatProvider object that supports your formatter into String.Format() or ToString(). This has several disadvantages: Suppose you have several ICustomFormatters. In order to have them all available to String.Format() and ToString(format), you have to merge their code and create an IFormatProvider to return an instance of your new class. You have to remember to utilize the IFormatProvider parameter. It’s easy to overlook, especially when you have existing code that calls String.Format() without using it. Some APIs may call String.Format() themselves. If those APIs do not offer an IFormatProvider parameter, your ICustomFormatter will not be available to them. The AdvancedFormatProvider solves the first two of these problems by providing a plug-in architecture.
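    To make the whole pattern concrete, here is a minimal, self-contained sketch of a custom formatter along the lines the article describes. This is illustrative only — it is not the actual AdvancedFormatProvider source, and the class name and symbol handling are simplified assumptions:

    using System;
    using System.Globalization;

    // Illustrative only: a tiny provider that adds an "I" (integer) symbol
    // and falls back to the framework for every other format string.
    public class IntegerFormatProvider : IFormatProvider, ICustomFormatter
    {
        public object GetFormat(Type formatType)
        {
            // The runtime asks for ICustomFormatter here; anything else is not ours.
            return formatType == typeof(ICustomFormatter) ? this : null;
        }

        public string Format(string format, object arg, IFormatProvider formatProvider)
        {
            if (string.IsNullOrEmpty(format))
                return arg == null ? String.Empty : arg.ToString();

            if (format.StartsWith("I"))
            {
                // "I" shows group separators ("N0"); "I-," suppresses them ("D").
                string nativeSymbol = format.Contains("-,") ? "D" : "N0";
                return ((IFormattable)arg).ToString(nativeSymbol, CultureInfo.CurrentCulture);
            }

            // Delegate unrecognized symbols back to the framework
            // (no provider passed, so this cannot recurse into us).
            return String.Format("{0:" + format + "}", arg);
        }
    }

    Usage:
    string a = String.Format(new IntegerFormatProvider(), "{0:I}", 1000);   // "1,000"
    string b = String.Format(new IntegerFormatProvider(), "{0:I-,}", 1000); // "1000"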

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, about how to enable the CDN for the static content in his website which had just been published on Windows Azure. The approach would be: move the static content (the images, CSS files, etc.) into blob storage; enable the CDN on his storage account; and change the URLs of those static files to the CDN URL. I think these are the very common steps when using a CDN. But this morning I found that the new Windows Azure SDK 1.4 and new Windows Azure Developer Portal had just been published, as announced at the Windows Azure Blog. One of the new features in this release is about the CDN, which means we can enable the CDN not only for a storage account, but for a hosted service as well. With this new feature, the steps I mentioned above become a lot simpler.   Enable CDN for Hosted Service To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the “Hosted Services, Storage Accounts & CDN” item we will find a new menu on the left hand side labeled “CDN”, where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed in my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account, or the hosted service, we want the CDN to be enabled for. If we selected the hosted service, like I did in the image above, the “Source URL for the CDN endpoint” will be shown automatically. This means the Windows Azure platform will treat all content under the “/cdn” folder as CDN-enabled. We cannot change this value at the moment. The following 3 checkboxes next to the URL are: Enable CDN: Enable or disable the CDN. HTTPS: If we need to use HTTPS connections, check it. Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it. Just click the “Create” button to let Windows Azure create the CDN for our hosted service. The CDN would be available within 60 minutes, as Microsoft mentioned. My experience is that after about 15 minutes the CDN could be used, and we can find the CDN URL in the portal as well.   Put the Content in CDN in Hosted Service Let’s create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN mentioned above, the source URL of the CDN endpoint would be under the “/cdn” folder. So in Visual Studio we create a folder under the website named “cdn” and put some static files there. Then all these files would be cached by the CDN if we use the CDN endpoint. The CDN of the hosted service can cache some kind of “dynamic” result with the Query String feature enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL places it under the “/cdn” folder. In the GetNumber action we just put a number value, which is specified by a parameter, into the view model; then the URL could be like /Cdn/GetNumber?number=2. 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace MvcWebRole1.Controllers
{
    public class CdnController : Controller
    {
        //
        // GET: /Cdn/

        public ActionResult GetNumber(int number)
        {
            return View(number);
        }
    }
}

And we add a view to display the number, which is super simple.

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    GetNumber
</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>The number is: <%: Model.ToString() %></h2>
</asp:Content>

Since this action is under the CdnController, the URL would be under the “/cdn” folder, which means it can be CDN-ed. And since we checked “Query String”, the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check if there’s any content cached with the key “GetNumber?number=2”. If yes, the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, then send the result back to the browser and cache it in the CDN. But note that the query string is treated as a plain string when used as the key of a CDN element. This means the URLs below would be cached as 2 separate elements in the CDN: http://az25311.vo.msecnd.net/GetNumber?number=2&page=1 http://az25311.vo.msecnd.net/GetNumber?page=1&number=2 The final step is to upload the project onto Azure. Test the Hosted Service CDN After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we had created is az25311.vo.msecnd.net, so all files under the “/cdn” folder can be requested with it. Let’s have a try on the sample.htm and c_great_wall.jpg static files. Also we can request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it will be shown very quickly, since the content comes from the CDN without any MVC server-side processing. The style of this page was missing, though. This is because the CSS file was not included in the “/cdn” folder, so the page cannot retrieve the CSS file from the CDN URL.   Summary In this post I introduced the new feature in the Windows Azure CDN that comes with the release of Windows Azure SDK 1.4 and the new Developer Portal. With the CDN of the Hosted Service we can just put the static resources under a “/cdn” folder so that the CDN can cache them automatically, with no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache some parts of the web page by using user controls and the CDN. For example, we can cache the log on user control in the master page so that the log on part will be loaded super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Convert DVD to MP4 / H.264 with HD Decrypter and Handbrake

    - by DigitalGeekery
    Are you looking for a way to convert your DVD collection to high quality MP4 files? Today we are going to take a look at using DVDFab HD Decrypter along with Handbrake to convert DVDs to MP4 using the H.264 codec.  Process Overview Handbrake is a great file conversion application, but it unfortunately can’t handle DVD copy protection. For that we will use DVDFab’s HD Decrypter. HD Decrypter is the always free portion of the DVDFab application. What HD Decrypter will do is remove the copy protection from your DVD, and copy the Video-TS and Audio-TS folders to your hard drive. Once the copy protection is gone, we will use Handbrake to convert the files to MP4 format with H.264 compression. Note: You’ll get full access to all the options in DVDFab during the 30-day trial period. However, the HD Decrypter is free and will continue to work. Ripping the DVD Install both Handbrake and DVDFab HD Decrypter. (Download links below) Once the applications are installed, place your DVD into your DVD drive and open DVDFab. On the welcome screen, click “Start DVDFab.” You’ll be prompted to choose your region. Click “OK.” The disc is analyzed and opened… You’ll be brought to the main interface. Make sure you have the Full Disc option selected at the left panel and “Copy DVD-Video (VIDEO_TS folder)” is selected. Click “Start.” Don’t be confused by the “DVD to DVD” option pop-up. We won’t actually be burning to DVD. The HD Decrypter portion of the DVDFab suite is part of the DVD to DVD option. Click “OK.” The DVD will be ripped to your hard drive. When the copy process is complete, you’ll be prompted to insert media to start the write process. We aren’t going to be burning to disc, so just click Cancel then close out of DVDFab.   Converting to MP4 Now we are ready to convert. Open Handbrake and click on the “Source” button at the top left. Select DVD / VIDEO_TS folder from the drop down list. Now we need to browse for the location where DVDFab HD Decrypter copied your movie. By default, that location will be the \DVDFab\Temp\FullDisc directory in your Documents folder. For example, in Windows 7, it would be: C:\Users\%username%\Documents\DVDFab\Temp\FullDisc\[Name of Your DVD] Select the folder, and click “OK.” You may be prompted to set a default path in Handbrake. This is an optional step. Click “OK.” If you’d like to set a default destination folder, go to Tools on the top menu and select Options. On the General tab, click “Browse” to select a destination output folder. Click “Close” when finished.   Next, click the dropdown list next to “Title.” Select the title that matches the length of the movie. It’s possible you may see more than one title with a similar length. If so, consult the DVD information, or a site like IMDB.com, to find the proper movie title length. Select your container under Output Settings. This will be your final output file extension. We will be using MP4 for this example. You also have the option of MKV.   If you didn’t set up a default destination folder, you’ll need to select one by clicking the “Browse” button. You can manually customize the output file name and change the output file extension to .mp4 (unless you prefer the iPod-friendly .m4v extension). Settings There are a variety of custom settings that can be changed either through the tabs listed under Output Settings, or by selecting one of the Presets to the right. 
If converting exclusively for any of the devices listed in the preset list, simply click on that device and the settings will be automatically applied in the Output Settings tabs. For more universal (non-Apple) devices or output, select the Normal profile.   For the most part, the presets will suit quite nicely. However, you can further customize settings if you’d like. The Picture tab allows you to tweak the size or cropping region. You must change Anamorphic to Loose or Custom to change the size.   The Video tab allows you to choose your codec. H.264 is the default. You also have the option to choose a target (output) size. Constant Quality is recommended to be set between 59% and 63%. Anything over 70% will likely result in an output file larger than the input without any improved quality. On the Subtitles tab, you can select an available subtitle from the dropdown list and click “Add” to add it to the output file. When you’ve finished any customizations you are ready to begin the conversion process. Click “Start.” A Command window will open and you can follow the process. You’ll probably want to find something to do in the meantime, as the process could take a couple of hours. When the process completes, you’re ready to watch your video.   Although it’s a time-consuming process that involves a couple of steps, this method will give you high-quality H.264 video files. If you want to rip and burn your DVDs to ISO, check out our article on how to rip and convert DVDs to an ISO image. Links Download DVDFab HD Decrypter (Part of the DVDFab suite) Download Handbrake
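    If you prefer to script the conversion instead of clicking through the GUI, Handbrake also ships a command-line version. A rough sketch of an equivalent invocation is below — the paths are hypothetical, and the quality scale (-q) has varied between Handbrake releases (a percentage in older builds, an RF value in newer ones), so check HandBrakeCLI --help on your install:

    HandBrakeCLI -i "C:\Users\you\Documents\DVDFab\Temp\FullDisc\MOVIE\VIDEO_TS" -t 1 -o "C:\Video\movie.mp4" -e x264 -q 20

    Here -i points at the ripped VIDEO_TS folder, -t selects the title number you identified in the Title dropdown, -o names the output file, and -e x264 picks the H.264 encoder.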

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity: Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there is an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team. When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later. 
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move. Working Asynchronously Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation 2 – Asynchronous modes of communication are (usually) persistent and searchable. You Don’t Have to Broadcast Live Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously. I Said What?!? Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. 
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful. When and How To Escalate I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication: Take notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than two people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them. Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work. Communicating Well is Hard At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. 
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to reiterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Example;
END
;

-- Test table is a heap
-- Non-clustered primary key on 'key_col'
CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
        PRIMARY KEY NONCLUSTERED (key_col)
)
;

-- Non-unique non-clustered index on the 'data' column
CREATE NONCLUSTERED INDEX [IX dbo.Example data]
ON dbo.Example (data)
;

-- Add 100 rows
INSERT dbo.Example WITH (TABLOCKX)
(
    key_col,
    data
)
SELECT
    key_col = V.number,
    data = V.number
FROM master.dbo.spt_values AS V
WHERE V.[type] = N'P'
AND V.number BETWEEN 1 AND 100
;

-- ================
-- Singleton lookup
-- ================
;

-- Single value equality seek in a unique index
-- Scan count = 0 when STATISTICS IO is ON
-- Check the XML SHOWPLAN
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;

-- ===========
-- Range Scans
-- ===========
;

-- Query 1
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col <= 5
ORDER BY E.key_col ASC
;

-- Query 2
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col > 95
ORDER BY E.key_col DESC
;

-- Query 3
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC
;

-- Query 4
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 20
AND E.key_col < 25
ORDER BY E.key_col DESC
;

-- Final query (singleton + range = 2 range scans)
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 10
OR E.key_col BETWEEN 15 AND 20
ORDER BY E.key_col ASC
;

-- === TIDY UP ===
DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi
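A small usage note on the 'Scan count = 0' comment in the script above: STATISTICS IO is a standard SQL Server session option, so the singleton behaviour can be observed like this (the exact logical-read figure will vary with index depth):

SET STATISTICS IO ON;

-- The Messages tab should report 'Scan count 0' for this singleton seek
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;

SET STATISTICS IO OFF;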

    Read the article

  • CodePlex Daily Summary for Thursday, April 08, 2010

    CodePlex Daily Summary for Thursday, April 08, 2010New ProjectsBackUpAnyWhere: BackUpAnyWhereCustomFormbyEndUser: 在项目开发中,经常遇到不同的用户对同一报表有不同要求的情况,有时甚至用户需要从头生成一个报表,在以前可能使用第三方的开发工具来实现。在SQL Server2005中,通过使用Reporting Services可以使最终用户不通过编码,只要了解数据结构就能自行编辑报表。本例使用Adventur...DbExecutor - linq based database executor: IEnumerable based database reader. (linq like primitive sql executor)DeepZoomRenderingPack: A collection of libraries and plug-ins architecture that turns various files (like PDF, PS, etc.) into a "Visual" representation that the DeepZoom ...DotNetNuke Russian Language packs: DNNRussianLP - DotNetNuke Russian Language pack. F# Refactor: Deisgned to bring Code Refactoring capabilities to the F# Language in Visual Studio 2010. Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Invocando Site HTTP e WebService dinamicamente com HTTPWebRequest Passando o SoapAction e Envelope XML Escrito em C# www.biztalkbrasil.c...Jitbit WYSWYG BBCode Editor: "Jitbit WYSIWYG-BBCode" is a browser-based JavaScript-powered WYSIWYG BBCode editorMRDS Services for Phidgets: MRDS (Microsoft Robotics Developer Studio) Services for Phidgets provides additional services for Phidgets sensors and controllers that are not inc...MSBuild Addin: This tool is a simple addin for VisualStudio 2008 used in association with Microsoft MSBuild. It allows you to run MSBuild directly inside Visual S...NISHIL-BizTalk Custom Eventlog Functiod: While testing our maps at times when it fails we cant trace it because we don’t know what the output of the functiods are. Normally in a single ma...Northest GNSG: Supinfo B3C Paris Northest University project. Galego, Neveu, Simon, Geissmann.Oily: Composite application project for oil parameters. It's developed in C#Outlook.Utility: The MSDN article Outlook Customization for Integrating with Enterprise Applications at http://msdn.microsoft.com/en-us/library/Aa479345 has quite a...Particle Plot Pivot: Scan select particle physics experiment web sites for plots and generate a Pivot display for easy browsing.project tca: project tca - translating chat application. Satisfyr: A new way of performing assertions on tests so that they remain agnostic to the underlying test framework, and leverage .NET built-in lambda syntax.sejce2008: jce se course wiki and projects linksSGB Controls: SGB Controls is a set of standard .net controls that include a number of enhancements to make life easier for the developer. These controls incl...Syringe: Syringe is a lightweight service container and dependency injection library designed for use with ASP.NET MVC2. Supported features: Dependency inj...topicbox: topicboxUr-Index: Ur-Index makes it a lot easier to create onomastic indexes for books in pdf format.VietGeeks ZohoDocApis: Implement .NET Zoho Document Apis library to help developer can intergrate Zoho Docs easy with their websitesWebometrics Dashboard: Webometrics Dashboardwebpress: It is a WebBased CMS and Blog platform.WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar makes it easy for WPF developers to use pen input in TabletPC or UMPC applications. 
The WPF InkCanvas control has drawing, e...WS-TMS: WS GISG HTT TMSNew ReleasesBatterySaver: Version 1.0: Fixed battery increase/decrease events not firing Fixed memory corruption error Added working set trimming (used very sparingly) Fixed poorly rende...Chargify.NET: Chargify.NET 0.65: Added in Transactions, Subscription Re-activation, and finally XML documentation (which has been missing in the previous releases).DbExecutor - linq based database executor: DbExecutor ver.1.0.0.1: renameDotNetNuke Russian Language packs: Russian Language Pack for DotNetNuke 04.09.02: Russian Language Pack for DotNetNuke 04.09.02Encrypted Notes: Encrypted Notes 1.6.3: This is the latest version of Encrypted Notes (1.6.3), with general improvements. It has an installer that will create a directory 'CPascoe' in My ...Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Código projeto CallSiteHTTP: Código escrito em C#.NET 2.0 - VS2005 Contem: Solution completa(código e executável) XML de configuração - Config.xml ...Jitbit WYSWYG BBCode Editor: Main package: Contains the JS-file, CSS-file and a sample.Live Writer Picasa Plugin: Live Writer Picasa Plugin 1.1.0: Changelog Communication with Picasa Web Albums is done directly via HTTP now (v1.0.0 used Google's GData .NET Libraries) The plugin can search fo...MRDS Services for Phidgets: Phidgets for RDS 2008 R3: First Beta Release This ZIP file contains a web page called Readme_CodePlex.html that explains how to install the RDS Phidgets services for RDS 200...MSBuild Addin: MsBuildAddin-v1.0.0: Initial versionMSBuild Addin: MsBuildAddin-v1.0.0-src.zip: Initial versionOutlook.Utility: Outlook.Utility v1: I have used most of the code in previous projects and seems to be quite stable. Of course the point of open sourcing this is so this project is use...Scrum Dashboard: Scrum Dashboard v3 Alpha 1: Scrum Dashboard v3 is targeting .NET 4, TFS 2010 and the brand new Scrum for Team System v3 process templates. Most of the code has been rewritten ...SharePoint Labs: SPLab4004A-FRA-Level100: SPLab4004A-FRA-Level100 This SharePoint Lab will teach you the 4th best practice you should apply when writing code with the SharePoint API. Lab La...SharePoint Labs: SPLab5012A-FRA-Level100: SPLab5012A-FRA-Level100 This SharePoint Lab will teach you how to provision a new welcome page (how to change and rename the default.aspx page) on ...Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet Source Code: Although the latest version pex and moles used with this project is not available, we thought it would be useful to provide a download to the source.Syringe: Syringe 1.0: Features Dependency injection on properties of services in container Dependency injection on constructors of services in container ASP.Net Mvc ...Text to HTML: 0.4.1.0: Cambios de la versiónOptimización del código de exportación reduciendo el código. Cambio en el icono de exportación. Añadido menú Seleccionar t...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 23: Build 23 Fix: Executing "Blame" through the Solution Explorer on a file opens TortoiseMerge rather than TortoiseBlame. 
Build 22 (beta) New: Visua...WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar 1.0: First release - included custom colour selectionMost Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsGraffiti CMSnopCommerce. Open Source online shop e-commerce solution.RawrShweet: SharePoint 2010 Team Messaging built with Pexpatterns & practices – Enterprise LibraryAcadsysAutoPocoIonics Isapi Rewrite FilterNcqrs Framework - The CQRS framework for .NETFarseer Physics Engine

    Read the article

  • 15 November 2012 Oracle Day

    - by TUFEKCIOGLU, FATIH
Event details for the Oracle Day taking place on 15 November at the Istanbul Congress Center in Harbiye: click here for the Oracle Day event details.

Make the Most of the Latest Technology: Leave Time for Innovation and Competition
15 November 2012

Cloud computing, mobility, social media and Big Data are redefining the world as we know it. Be one of the first to bring these technologies into your organisation: strengthen your competitive position by developing new products and services faster, improving the customer experience, and putting innovative new business models into practice. Oracle and its partners can help you adapt these technological innovations to your organisation, so that you can take advantage of market changes before your competitors do. Join us at Oracle Day to learn how Oracle simplifies information technology by using the latest technologies in software and hardware designed to work together. At Oracle Day, don't miss the chance to: learn about Oracle's cloud computing, Big Data, social media and business application solutions; hear customer success stories about real business transformations; meet industry peers who face the same challenges you do; meet Oracle experts and partners and watch new product demonstrations; and pick up the latest product news from Oracle OpenWorld.

Regards,
Oracle Turkey

Register now!

Platinum Sponsor
Istanbul Kongre Merkezi, Taskisla Caddesi, Harbiye 34367 Istanbul / Türkiye
15 November 2012, Thursday, 08:30 - 18:30
RSVP: [email protected]
oracle.com/oracleday
Follow us: #oracleday

Session key: Oracle Partner | Customer Success Story | TROUG | Session in English

Agenda for the day:

08:45-09:30  Registration
09:30-10:00  Welcome - Filiz Dogan, General Manager, Oracle Turkey
10:00-10:30  Navigating Complexity by Simplifying I.T. - Andrew Mendelsohn, Senior Vice President, Oracle Database Server Technologies, Oracle
10:30-11:00  A Transformational Cloud Journey - Ilker Kuruöz, CIO, Turkcell
11:00-11:20  The Essentials of the New Era of Data Centers - Yalim Eristiren, Deputy General Manager, Intel
11:20-11:30  Slimfit - Feyza Narli, Business Solutions Director, Innova
11:30-12:00  Innovation with Java - Cuma Yigit, Technical Architect, Etiya; Yusuf Tok, Java Group Manager, OBSS; Ersun Engel, Sales Manager, Oracle
12:00-12:10  Coffee Break

The afternoon sessions run in ten parallel rooms (Rooms 1-10), themed as follows: Customer Experience: Empower Your Employees, Strengthen Your Brand; Transformation in Business Processes; More Data, Faster Results: Strengthen Your Business with Analytics; But How? A Solution Map for Cloud Practitioners; Use the Power of Innovation in Your Business Applications; Unleash the Power of IT with the Next-Generation Data Center; Oracle & Partner Solutions - Success Stories I; Oracle & Partner Solutions - Success Stories II; Oracle Financial Services - Core Banking and Analytical Solutions; Oracle User Group (TROUG).

12:10-12:40  Parallel sessions: Customer Experience: Empower Your Employees, Strengthen Your Brand; Cost Management and Production Tracking at the Tekfen Ceyhan Steel Plant; Move Faster with More Data: Powering Creativity with Action; Architect Your Cloud: A Blueprint for Cloud Builders; Technology Strategies that Drive Business Excellence: Get Social. Be Mobile. Run Cloud.; Unleash the Power of IT with the Next-Generation Data Center; Innovate with Oracle - Virtual Banking and Self Service Channels; What's Next for Oracle Database?
(Presenting organisations for this slot, in room order: Oracle, Innova, Oracle, Oracle, Oracle, Oracle, Oracle, Oracle)

12:40-13:40  Lunch

13:40-14:10  Parallel sessions: Your Applications Are Now in the Cloud; Finance in the 21st Century: Use the Potential - Reach the Results!; Oracle Big Data and Business Analytics Solutions on Intel Processors, "At the Speed of Thought"; From Sea Level to the Clouds I; Make Your CRM Social: Telaura Social CRM; The Trend in the Next-Generation Data Center: Simplicity; Subscriber Information Management System (ABYS); High Oracle Database Performance - When Oracle Database Is Integrated with Oracle Storage; Financial Transformation and Integrated Data Warehouse Solutions in Insurance; SQL/PLSQL New Features. (Presenters, in room order: Oracle, Oracle, Intel, Oracle, Etiya, Oracle, Inspirit, Oracle, Oraturk, TROUG)

14:10-14:20  Coffee Break

14:20-14:50  Parallel sessions: Social Channels in Sales and Marketing; Customer Success Stories Panel: The Transformation Journey and Its Results; Tukas's Analytics Journey; From Sea Level to the Clouds II; Loupe: Proactive Monitoring for IP-Based Services; Eliminate the Risks in Your Data Center; Oracle iAS to WebLogic Migration; Oracle ZFS Storage Appliance - Turkcell Experiences; Analytical Transformation - Risk and Finance Together to Address the Regulatory Changes Today and Tomorrow; Data Mining Belongs in the Database: Oracle R Enterprise and the Oracle Data Mining Option in Practice. (Presenters, in room order: Oracle; Akbank, Teknosa, Dogus Holding, Ceynak; Gtech; Oracle; Netas; Oracle; OBSS; Turkcell & Gantek; Oracle; TROUG)

14:50-15:00  Coffee Break

15:00-15:30  Parallel sessions: Integrated Solutions in Talent Management: Recruitment Is Now Easier with Taleo; Customer Success Stories Panel: The Transformation Journey and Its Results; Big Data & Exalytics - Exadata's Future Roadmap - Free Platinum Services for Engineered Systems; How Aksigorta Tests Its Applications with Oracle ATS (Application Testing Suite); Raise Your Service Levels with Superior Performance and Flexibility; Process Management with Oracle BPM in Smart Municipality Applications; End-to-End Backup Solutions with Tape; Connecting with Customers to Enhance Revenue Generation: Unleashing the Power of an Enterprise Revenue Management and Billing Solution; Today's Application Architecture Problems and Suggested Solutions. (Presenters, in room order: Oracle; Akbank, Teknosa, Dogus Holding, Ceynak; Oracle; Oracle; Aksigorta; Oracle; Sampas; Remivac; Oracle; TROUG)

15:30-15:40  Coffee Break

15:40-16:10  Parallel sessions: Sales Network Management with the New JD Edwards Release; Centralised Business Rules Management in Turkish with Oracle Policy Automation; Corporate Scorecard Solutions with Oracle BI; With Turkcell Süperbulut, Software Is Now at Your Service; Engineered Systems Now Make a Difference for SAP Customers Too; Building the Next-Generation Data Center: Tried and Proven Methods; The Türk Telekom SOA Project Success Story; How Yapi Kredi Sigorta Monitors the End-User Experience in Its Applications; NFC Trends for 2013; Karagöz and Hacivat in the Database. (Presenters, in room order: TupperWare & Akademi Danismanlik, Oracle, Oracle, Turkcell, Oracle, Oracle, Türk Telekom, Yapi Kredi Sigorta, Smartsoft, TROUG)

16:10-16:20  Coffee Break

16:20-16:50  Parallel sessions: The Journey to Consistent, Unique Data: Oracle Master Data Management; Asset Management at Turkcell/Superonline; Easy Analysis of Every Kind of Data with Endeca; How Is Security Ensured in Cloud Computing?; Winning the Competitive Race: Make the Right Decisions Before Your Rivals with GoldenGate.
Cloud Infrastructure Strategies for Your Data Center; The Oracle Orchestra: Lend an Ear to Polyphonic Management; Logistics Intelligence / The Horoz Lojistik Success Story; High-Performance Java Applications with Engineered Systems; International Growth - Helping the Banks Standardize Overseas Operations; Oracle Big Data. (Presenters, in room order: Oracle, Turkcell, Oracle, Oracle, Oracle, Oracle, Kora & Horoz Lojistik, Oracle, Oracle, TROUG)

16:50-18:30  Cocktail Reception

If you are an employee or official of a public institution or organisation, please click here for information about the important ethics rules that apply to this event.

Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Contact Us | Legal Notices | Privacy Statement

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx

Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews.

What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the Application Fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connected to from the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better.

Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.

Programming Windows Azure by Sriram Krishnan:

Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear. (Chapter One)
Good foundation. Background and history, but not too much.
I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me.
Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature). (Chapter Two)
Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service. (Chapter 3)
Place storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy to switch in and out. (Chapter 4)
There are two Runtime APIs, one external and one internal. Realizing how powerful this paradigm really is. Some places seem light and drop off, but perhaps that’s best. The management API is not charged, which is nice. I don’t often think about the price until it comes to an actual deployment. (Chapter 5)
Csmanage is something I want to dig into deeper. The API requires package moves to Blob storage first, so it needs a URL. A csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file.
Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6)
Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly.
Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. The plumbing for this is internal; you are not required to code for it. (Chapter 7)
The REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature.
There’s a decent quick description of REST in this chapter. Learned about the CloudDrive code – a PowerShell sample that mounts Blob storage as a local provider. Works against the Dev fabric by default, can be switched to an Account.
Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated.
No, blobs are not of any size or number. Not a good statement. (Chapter 8)
Blob storage is probably Azure’s closest play to Infrastructure as a Service (IaaS). Blob change operations must be authenticated, even when public. The chapters on storage are pretty in-depth.
Queue messages are base-64 encoded. (Chapter 9)
The visibility timeout ensures processing of a message in a disconnected system. Order is not guaranteed for a message, so if you need that, set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system.
Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10)
An entity can have up to 255 properties. Use “ID” for the class to indicate the key value, or use the [DataServiceKey] attribute. LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description of the process of selecting the Partition and Row Key.
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.
On deleting a storage object, it is instantly unavailable; however, a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.
Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a la the Porter Stemming Algorithm. (Chapter 11)
Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBAs in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the ETag and allows you to take action appropriately.
I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. The chapter seems out of place, and should be combined with the Blob chapter.
(Chapter 13) on SQL Azure is dated, although the base concepts are OK. Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server, really) database.
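That continuation-token note is easier to see in code. A minimal C# sketch of the pattern, using the later Microsoft.WindowsAzure.Storage table client rather than the StorageClient library the book covers; the table name "People" is a hypothetical example:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class ContinuationDemo
{
    static void Main()
    {
        // Development storage emulator account; swap in a real connection string as needed
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudTable table = account.CreateCloudTableClient().GetTableReference("People");

        TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>().Take(100);
        TableContinuationToken token = null;

        do
        {
            // Each segment is one 'page'; the token carries the position of the next page
            TableQuerySegment<DynamicTableEntity> segment = table.ExecuteQuerySegmented(query, token);
            foreach (DynamicTableEntity entity in segment.Results)
                Console.WriteLine("{0} / {1}", entity.PartitionKey, entity.RowKey);

            // A null token means we just read the last page - the 'falls out of the check' case
            token = segment.ContinuationToken;
        } while (token != null);
    }
}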

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

Finally, we made it to have our April meetup - in May. The most obvious explanation is the increased amount of open source and IT activities that either the MSCC, the Linux User Group of Mauritius (LUGM), or the University of Mauritius Student's Computer Club is organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it!

Unfortunately, we also had to deal with arranging for a location this time. It was kind of an odyssey, as my requests (and phone calls) weren't answered even though I tried several times - well, kind of disappointing, and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible:

- Location, and
- Date and time

You can't just change one or both at the very last minute. Well, this time we had to do it due to unforeseen reasons, and I apologise to any MSCC member who couldn't make it to our April meetup. Okay, lesson learned but now back to the actual meetup report ...

Shortly after the meeting I placed the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..."

Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I would also like to mention that the number of attendees went back to what I would call a "standard" level of participation. This time there were 12 craftsmen, but again a good number of First Timers.

Reactions of other attendees

Here are some impressions and feedback from our participants:

"Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments

"He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC – Let's talk about Scripting

Glad to see a couple of first time attendees, especially students from the university itself.

Some details on the presentations

MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9

Gimme some love ... bash and other shells

Ish gave a great introduction into shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, and the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on an Ubuntu 14.04 machine, it was a solid presentation. Have a closer look at his slides - published on his blog on MSCC – Let's talk about Scripting.

Oddities of scripting

After the brief introduction into bash it was Daniel's turn to highlight a good number of oddities when working with shell scripts. First of all, it should be clear that scripting is not meant for any kind of full software implementation, but simply to automate administrative procedures and to simplify routine jobs on a system. One of the cool oddities that he mentioned is that everything (!)
in a shell is represented by strings; there are no other types like integer, float, date-time, etc. that you'd like to use in a full-fledged programming language. Let's have a look at his sample:

more to come...

What's the output?

As a conclusion, Daniel suggests that shell scripting should be limited (but not restricted) to automating repetitive command stacks and batch jobs, startup wrappers for applications in order to set up the execution environment, and other not too sophisticated jobs. But as soon as it might involve a little bit more logic, or you might rely on performance, it's better to write an application in Ruby, Python, or Perl (among others of course). This also enables the possibility to test your code properly.
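Daniel's actual sample wasn't captured above, but the strings-only oddity is easy to reproduce with a generic bash snippet like this:

#!/bin/bash
x=1
y=2

echo "$x + $y"        # prints '1 + 2': variables are plain strings, no arithmetic
echo $(( x + y ))     # prints '3': arithmetic only happens inside $(( ... ))

[ "$x" = "01" ]  && echo "equal as strings"    # does not fire: '1' and '01' differ as text
[ "$x" -eq 01 ] && echo "equal as numbers"     # fires: -eq forces a numeric context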
MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks

MSCC: Daniel gives an overview about the pros and cons of shell scripting versus programming

MSCC: PowerShell as your scripting solution on Windows operating systems

The path of the Enlightened is long ... and tough.

Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand to give a presentation on Microsoft PowerShell after all. I had already taken this topic out of the announcement, but the audience wanted to have some information. Okay, then let's see what I could do - improvised style.

While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell from back in 2006, and its predecessors MS DOS and Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time. Instead of listening to their clients' needs and demands, they ignored the need to administrate Windows server farms without any UI tools. PowerShell is actually a result of this, and seeing that shell scripting has been a common, reliable and fast tool in an administrator's toolbox for decades, Microsoft had to adapt from their Microsoft Management Console (MMC) to a broader approach. It's not like shell scripting was something new; it is in daily use on alternative operating systems like AIX, HP-UX, Solaris, and last but not least Linux.

Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of Command Prompt and Scripting Hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets, and open to other Microsoft products like SQL Server and SharePoint, as well as third-party software applications. Similar to MMC, PowerShell also offers the ability to administer other machines remotely, only without a graphical user interface, and therefore it's easier to automate and schedule regular tasks.

Following is a sample of a PowerShell script file (extension .ps1):

$strComputer = "."
$colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer

foreach ($objItem in $colItems)
{
    write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
    write-host "BIOS Version: " $objItem.BIOSVersion
    write-host "Build Number: " $objItem.BuildNumber
    write-host "Caption: " $objItem.Caption
    write-host "Code Set: " $objItem.CodeSet
    write-host "Current Language: " $objItem.CurrentLanguage
    write-host "Description: " $objItem.Description
    write-host "Identification Code: " $objItem.IdentificationCode
    write-host "Installable Languages: " $objItem.InstallableLanguages
    write-host "Installation Date: " $objItem.InstallDate
    write-host "Language Edition: " $objItem.LanguageEdition
    write-host "List Of Languages: " $objItem.ListOfLanguages
    write-host "Manufacturer: " $objItem.Manufacturer
    write-host "Name: " $objItem.Name
    write-host "Other Target Operating System: " $objItem.OtherTargetOS
    write-host "Primary BIOS: " $objItem.PrimaryBIOS
    write-host "Release Date: " $objItem.ReleaseDate
    write-host "Serial Number: " $objItem.SerialNumber
    write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
    write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
    write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
    write-host "SMBIOS Present: " $objItem.SMBIOSPresent
    write-host "Software Element ID: " $objItem.SoftwareElementID
    write-host "Software Element State: " $objItem.SoftwareElementState
    write-host "Status: " $objItem.Status
    write-host "Target Operating System: " $objItem.TargetOperatingSystem
    write-host "Version: " $objItem.Version
    write-host
}

This gives you information about your BIOS and Windows OS. Then change the computer name to another one on your network (NetBIOS based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too.

Upcoming Events

What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:

- Hacking Defence (14. May 2014)
- WebCup Maurice (7. & 8. June 2014)
- Developers Conference (TBA ~ July 2014)
- Linuxfest 2014 (TBA ~ November 2014)

Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

My resume of the day

Spontaneous and improvised :) The new location at the University of Mauritius turned out very well; there is plenty of space, and it could be a good choice for future meetings. Especially, having the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of the Middlesex University. Well, we will see in the future... But for now this will be on hold until approximately October when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!

    Read the article

  • CodePlex Daily Summary for Monday, May 03, 2010

    CodePlex Daily Summary for Monday, May 03, 2010New Projects.radiko: エアログラス採用のシンプルなradiko(http://radiko.jp/)クライアントです。タスクトレイのアイコンからラジオ局の切り替えができます。7Scale: EmptyB2C MVC Plattform: The B2C MVC Plattform aims to be pluggable site framework to help small busisness accomplish basic tasks between business and customers.ElValWeb: The goal of the project to create full featured implementation of ModelValidatorProvider for Enterprise Library Application Validation Block, wich ...esatis yazilimi: asp.net yazılımı ile satış magazasi websitesi kur.IEnumerable.It sample code: IEnumerable.It sample codejQuery MicroAjax for ASP.NET: MicroAjax is a set of jQuery plugins and .NET components designed to provide simple, powerful and efficient Ajax centric web application design pat...Karbon VOS: Karbon VOS is an advanced Virtual Operating System Template for Visual Basic Express. It's developed in Visual Basic. Karbon VOS hopes to one day b...LINQ Mapper: LINQ Mapper translates simple LINQ queries between different sources. It allows you to write queries against your domain model, but have them run ...Meccano Silverlight Framework: Meccano is a new generation of frameworks for creation of LOB Silverlight applications based on MEF, RX, WCF, ADO.NET Data Services etc. It is inte...Multiuse Model View (MMV) Library: This project is an open source library for the Multiuse Model View (MMV) pattern for building robust WPF and ASP.Net applications. Visit my blog ht...Process Affinity Control: Process Affinity Control allows to set the affinity masks of processes based on rules.SilverSpatial: This project helps bridge the gap between Silverlight and Geo-Spatial data type (such as SQL Spatial). It implements the Well-Known-Binary (WKB) fo...StageAssets: Application for storing data about "things" and people in theatre. For example equipment, actors and so on.Stratosphere: Mono compatible library with set of primitives to work with scalable table, queue and block containers with corresponding implementations for Amazo...TRX Web-Viewer: A simple web-based application to upload and view VSTS 2008 and VSTS 2010 test result files with some basic lookup and feature-wise management of r...WDT2: WDT 2 is the school project to begin learning .NET enviroment, The main focus is on learning the use of almost all the componenets.WPF Behavior Library: WPF Behavior Library is a set of additional actions for WPF that allow you to add extra behaviors to a control quickly and easily. Currently the on...YouTubeEmbeddedVideo WebControl for ASP.NET: A Control to embed YouTube videos in ASP.NET pages. Works in C# and VB.NETNew Releases.radiko: beta: 東京局のみ対応 あとは手抜きActiveWorlds Managed .NET SDK: AwManaged Technology Preview - WIN32 (Alpha): This WIN32 release contains the Server Console Application. The Setup executable should be run as administrator on O.S. using UAC (Vista/Win7)AJAX Control Framework: v1.0.1.0: v1.0.1.0 - Contains a Bing Maps sample project, a number of bug fixes and a few performance improvements. - AJAX enable ANY custom control that der...App_Code (and Usercontrol) Editor (ACE): v1.0.0 alpha: The first alpha release of the AppCode Editor for Umbraco 4.0.3 is now available to download! 
Tested to work with usercontrols - pre-compilation wi...ElValWeb: ElValWeb 0.0.1.0: Version 0.0.1.0 contains client validation support forAndCompositeValidator ContainsCharactersValidator DomainValidator NotNullValidator Or...esatis yazilimi: magaza: magazanın yazılımları ve veri tabanının yazılımlarıGrunty OS: Grunty OS USB: Download Grunty OS for USBGrunty OS: Grunty OS.ISO: Grunty OS ISOKarbon VOS: Milestone 1 (Kaptua): Milestone 1...Live Meeting API Wrapper: LiveMeetingAPIWrapperV1.2: Added get meeting and update meeting.Multiuse Model View (MMV) Library: v0.3: first alpha release. Medium amount of functionality and some use cases tested.MVC Foolproof Validation: Beta 0.9.3774: Adds resource provided error messages, regular expression operators and a new RegularExpressionIf attribute.Process Affinity Control: Version 1.0.0: This is the first release. Planned features for the next release: No administrative privileges needed to run the manager Select the active scena...SharePoint 2010 Service Manager: SharePoint 2010 Service Manager 1.1: Added support to run under UAC with automatic security elevationSharePoint Event Handler Manager: Event Handler Manager 2.0: Please download the application here: http://www.ackermantech.com/registerevents.aspxSkyDrive Synchronizer: SkyDrive Sync Beta 0.1: Beta release includes: Upload and download Synchronize updated files Delete files on web/locally if not in source Split larger files into sma...Stratosphere: Stratosphere 1.0.0.0: Initial beta releaseSuggested Resources for .NET Developers: 0.8.0.0 VS2010 - focus on displaying content: This is the first release of Suggested Resources that can be downloaded from the internet. While there is still a lot of work to be done this rele...TRX Web-Viewer: TRX Web-Viewer V1.0: First working versionVCC: Latest build, v2.1.30502.0: Automatic drop of latest buildWatchersNET.TagCloud: WatchersNET.TagCloud 01.04.00: !Whats New New Tag Mode: Search Referrers (Shows Search Tags From Google, Ask, Bing, Yahoo and the Dnn Site Search) Taxonomy Tags now contains L...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 Beta 2e: Fully featured beta version of Visual WebGui Web/Cloud Applicaiton Development FrameworkWPF Behavior Library: WPF Behavior Library 0.1 Release: First alpha release of the WPF Behavior Library. It should be stable but doesn't have all of the features it will have in the future and the API ma...xvanneste: Sharepoint Social Network Client: Client permettant d'avoir accés au social network de sharepoint a l'exterieur du navigateur.Most Popular ProjectsRawrWBFS ManagerAJAX Control Toolkitpatterns & practices – Enterprise LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)iTuner - The iTunes CompanionASP.NETDotNetNuke® Community EditionMost Active ProjectsIonics Isapi Rewrite Filterpatterns & practices – Enterprise LibraryRawrHydroServer - CUAHSI Hydrologic Information System ServerAJAX Control Frameworkpatterns & practices: Azure Security GuidanceTinyProjectBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleDambach Linear Algebra Framework

    Read the article

  • Using Radio Button in GridView with Validation

    - by Vincent Maverick Durano
A developer is asking how to select one radio button at a time if the radio button is inside the GridView. As you may know, setting the GroupName attribute of a radio button will not work if the radio button is located within a data representation control like GridView. This is because a radio button inside the GridView behaves differently: since a GridView is rendered as a table element, at run time it will assign a different "name" to each radio button, and hence you are able to select multiple rows. In this post I'm going to demonstrate how to select one radio button at a time in a GridView and add a simple validation on it.

To get started, let's go ahead and fire up Visual Studio and create a new web application / website project. Add a WebForm and then add a GridView. The markup would look something like this:

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false">
    <Columns>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:RadioButton ID="rb" runat="server" />
            </ItemTemplate>
        </asp:TemplateField>
        <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
        <asp:BoundField DataField="Col1" HeaderText="First Column" />
        <asp:BoundField DataField="Col2" HeaderText="Second Column" />
    </Columns>
</asp:GridView>

Notice that I've added a TemplateField column so that we can add the radio button there. I have also set up some BoundField columns and set the DataFields as RowNumber, Col1 and Col2. These columns are just dummy columns and I used them for the simplicity of this example. Now where did these columns come from? These columns are created by hand in the code-behind file of the ASPX. Here's the code below:

private DataTable FillData()
{
    DataTable dt = new DataTable();
    DataRow dr = null;

    // Create DataTable columns
    dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));
    dt.Columns.Add(new DataColumn("Col1", typeof(string)));
    dt.Columns.Add(new DataColumn("Col2", typeof(string)));

    // Create a row for each column set
    dr = dt.NewRow();
    dr["RowNumber"] = 1;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 2;
    dr["Col1"] = "AA";
    dr["Col2"] = "BB";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 3;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 4;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    dr = dt.NewRow();
    dr["RowNumber"] = 5;
    dr["Col1"] = "A";
    dr["Col2"] = "B";
    dt.Rows.Add(dr);

    return dt;
}

And here's the code for binding the GridView with the dummy data above:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        GridView1.DataSource = FillData();
        GridView1.DataBind();
    }
}

Okay, we now have a GridView with a radio button on each row. Now let's go ahead and switch back to the ASPX markup. In this example I'm going to use JavaScript to make sure that only one radio button can be selected at a time.
Here's the JavaScript code below:

function CheckOtherIsCheckedByGVID(rb) {
    var isChecked = rb.checked;
    var row = rb.parentNode.parentNode;
    if (isChecked) {
        row.style.backgroundColor = '#B6C4DE';
        row.style.color = 'black';
    }
    var currentRdbID = rb.id;
    parent = document.getElementById("<%= GridView1.ClientID %>");
    var items = parent.getElementsByTagName('input');
    for (i = 0; i < items.length; i++) {
        if (items[i].id != currentRdbID && items[i].type == "radio") {
            if (items[i].checked) {
                items[i].checked = false;
                items[i].parentNode.parentNode.style.backgroundColor = 'white';
                items[i].parentNode.parentNode.style.color = '#696969';
            }
        }
    }
}

The function above sets the row style of the currently selected radio button to show that the row is selected, then loops through the radio buttons in the GridView, de-selects the previously selected radio button and sets that row's style back to its default. You can then call the JavaScript function above in the onclick event of the radio button like this:

<asp:RadioButton ID="rb" runat="server" onclick="javascript:CheckOtherIsCheckedByGVID(this);" />

Here's the output below:

On Load:
After Selecting a Radio Button:

As you have noticed, on initial load there's no default selected radio button in the GridView. Now let's add a simple validation for that. We will basically display an error message if a user clicks a button that triggers a postback without selecting a radio button in the GridView. Here's the JavaScript for the validation:

function ValidateRadioButton(sender, args) {
    var gv = document.getElementById("<%= GridView1.ClientID %>");
    var items = gv.getElementsByTagName('input');
    for (var i = 0; i < items.length; i++) {
        if (items[i].type == "radio") {
            if (items[i].checked) {
                args.IsValid = true;
                return;
            } else {
                args.IsValid = false;
            }
        }
    }
}

The function above loops through the rows in the GridView and finds all the radio buttons within it. It then checks each radio button's checked property: if a radio button is checked, it sets IsValid to true, else it sets it to false. The reason why I'm using IsValid is because I'm using the ASP.NET validator control for validation. Now add the following markup below under the GridView declaration:

<br />
<asp:Label ID="lblMessage" runat="server" />
<br />
<asp:Button ID="btn" runat="server" Text="POST" onclick="btn_Click" ValidationGroup="GroupA" />
<asp:CustomValidator ID="CustomValidator1" runat="server" ErrorMessage="Please select row in the grid." ClientValidationFunction="ValidateRadioButton" ValidationGroup="GroupA" style="display:none"></asp:CustomValidator>
<asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="GroupA" HeaderText="Error List:" DisplayMode="BulletList" ForeColor="Red" />

And then at the Button Click event add this simple code below, just to test that the validation works:

protected void btn_Click(object sender, EventArgs e)
{
    lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt");
}

Here's the output below that you can see in the browser:

That's it! I hope someone finds this post useful!

Technorati Tags: ASP.NET,JavaScript,GridView
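One cautious footnote to the approach above: ClientValidationFunction runs only in the browser, so a user with JavaScript disabled could bypass it. A minimal sketch of a matching server-side check, assuming you also wire OnServerValidate="ValidateRadioButtonServer" on the CustomValidator shown earlier:

protected void ValidateRadioButtonServer(object source, ServerValidateEventArgs args)
{
    // Valid only if at least one radio button in the grid is checked
    args.IsValid = false;
    foreach (GridViewRow row in GridView1.Rows)
    {
        RadioButton rb = (RadioButton)row.FindControl("rb");
        if (rb != null && rb.Checked)
        {
            args.IsValid = true;
            return;
        }
    }
}

protected void btn_Click(object sender, EventArgs e)
{
    // Server-side validators in the 'GroupA' group ran automatically before this event
    if (!Page.IsValid) return;
    lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt");
}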

    Read the article

  • Java JRE 7 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
Java Runtime Environment 7u10 (a.k.a. JRE 1.7.0_10 build 18) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 11i and 12 Windows-based desktop clients.

What's needed to enable EBS environments for JRE 7?

EBS customers should ensure that they are running JRE 7u10, at minimum, on Windows desktop clients. Of the compatibility issues identified with JRE 7, the most critical is an issue that prevents E-Business Suite Forms-based products from launching on Windows desktops that are running JRE 7.  Customers can prevent this issue -- and all other JRE 7 compatibility issues -- by ensuring that they have applied the latest certified patches documented for JRE 7 configurations to their EBS application tier servers.  These are summarized here for convenience. If the requirements change over time, please check the Notes for the authoritative list of patches:

- Apply Forms patch 14615390 to EBS 11i environments (Note 125767.1)
- Apply Forms patch 14614795 to EBS 12.0 and 12.1 environments (Note 437878.1)

These patches are compatible with JRE 6 and 7, production ready, and fully-tested with the E-Business Suite.  These patches may be applied immediately to all E-Business Suite environments. All other Forms prerequisites documented in the Notes above should also be applied.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 7 are documented in these Notes:

For EBS 11i:
- Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 11i (Note 290807.1)
- Upgrading Developer 6i with Oracle E-Business Suite 11i (Note 125767.1)

For EBS 12:
- Deploying Sun JRE (Native Plug-in) for Windows Clients in Oracle E-Business Suite Release 12 (Note 393931.1)
- Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 (Note 437878.1)

Prerequisites for 32-bit and 64-bit JRE certifications

JRE 1.7.0_10 32-bit + EBS 11.5.10.2
- Windows XP SP3
- Windows Vista SP1 and SP2
- Windows 7 and Windows 7 SP1
- Forms 6.0.8.28.x patch 14615390 (Note 125767.1)

JRE 1.7.0_10 32-bit + EBS 12.0 & 12.1
- Windows XP SP3
- Windows Vista SP1 and SP2
- Windows 7 and Windows 7 SP1
- Forms 10g overlay patch 14614795 (Note 437878.1)
- SSL users: 10.1.0.5 version of Patch 6370967 applied to AS 10.1.3 with OPatch. Note: this fix is already included in the April 2011 AS 10.1.3.5 CPU patch and later.

JRE 1.7.0_10 64-bit + EBS 11.5.10.2
- Windows 7 (64-bit) and Windows 7 SP1 (64-bit)
- Forms 6.0.8.28.x patch 14615390 (Note 125767.1)

JRE 1.7.0_10 64-bit + EBS 12.0 & 12.1
- Windows 7 (64-bit) and Windows 7 SP1 (64-bit)
- Forms 10g overlay patch 14614795 (Note 437878.1)
- SSL users: 10.1.0.5 version of Patch 6370967 applied to AS 10.1.3 with OPatch. Note: this fix is already included in the April 2011 AS 10.1.3.5 CPU patch and later.

EBS + Discoverer 11g Users

JRE 1.7.0_10 (7u10) is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:
- Discoverer (11g) 11.1.1.6 plus Patch 13877486 and later

Reference: How To Find Oracle BI Discoverer 10g and 11g Certification Information (Document 233047.1)

Worried about the 'mismanaged session cookie' issue?

No need to worry -- it's fixed.  To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23.
These fixes will carry forward and continue to be included in all future JRE releases on the JRE 6 and 7 codelines.  In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22 on the JRE 6 codeline, and JRE 7u10 and later JRE 7 codeline updates.

All JRE 6 and 7 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline.  We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops.

Implications of Java 6 End of Public Updates for EBS Users

The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap. The latest updates to that page (as of Sept. 19, 2012) state (emphasis added):

Java SE 6 End of Public Updates Notice: After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support.

What does this mean for Oracle E-Business Suite users?

EBS users fall under the category of "enterprise users" above.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates after February 2013. In other words, nothing will change for EBS users after February 2013.  EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6. These Java SE 6 updates will be made available to EBS users for the Extended Support periods documented in the Oracle Lifetime Support policy document for Oracle Applications (PDF):

- EBS 11i Extended Support ends November 2013
- EBS 12.0 Extended Support ends January 2015
- EBS 12.1 Extended Support ends December 2018

Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?

No. This upgrade is highly recommended but currently remains optional. JRE 6 will be available to Windows users to run with EBS for the duration of your respective EBS Extended Support period.  Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients.

Coexistence of JRE 6 and JRE 7 on Windows desktops

The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default; JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.
Applying Updates to JRE 6 and JRE 7 on Windows desktops

Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. For Windows users with both JRE 6 and 7 installed, auto-update will keep only JRE 7 up-to-date.  JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support.  EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2). The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page.  An RSS feed is available on that site for those who would like to be kept up-to-date.

What will Mac users need?

Oracle will provide updates to JRE 7 for Mac OS X users. EBS users running Macs will need to upgrade to JRE 7 to receive JRE updates. The certification of Oracle E-Business Suite with JRE 7 for Mac-based desktop clients accessing EBS Forms-based content is underway. Mac users waiting for that certification may find this article useful: How to Reenable Apple Java 6 Plug-in for Mac EBS Users

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

No. This upgrade will be highly recommended but will be optional for EBS application tier servers running on Windows, Linux, and Solaris.  You can choose to remain on JDK 6 for the duration of your respective EBS Extended Support period.  If you remain on JDK 6, you will continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6. The certification of Oracle E-Business Suite with JDK 7 for EBS application tier servers on Windows, Linux, and Solaris as well as other platforms such as IBM AIX and HP-UX is planned.  Customers running platforms other than Windows, Linux, and Solaris should refer to their Java vendors' sites for more information about their support policies.

References

- Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
- Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
- Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
- Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

Related Articles

- Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
- Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009
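As a practical footnote to the patch lists above, administrators can confirm which JRE a Windows desktop actually resolves to. This is a generic check, not an Oracle-supplied utility; the registry key shown is the one used by the standard Sun/Oracle JRE installers (32-bit JREs on 64-bit Windows appear under the Wow6432Node branch instead):

@echo off
REM Show the JRE that the PATH resolves to (a version string such as "1.7.0_10")
java -version

REM Show the default JRE family recorded by the installer in the registry
reg query "HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v CurrentVersion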

    Read the article

  • How to mount a blu-ray drive?

    - by Stephan Schielke
    Maybe it is for the best to close this question. This has nothing to do with a blu-ray drive in general anymore; it is probably a hardware defect. I will try to test it with a Windows system and different cables again... Thanks so far.

    I have a blu-ray/DVD/CD-ROM drive with SATA. Ubuntu won't find it under /dev/sd.

        wodim --devices
        wodim: Overview of accessible drives (1 found) :
        -------------------------------------------------------------------------
        0 dev='/dev/sg2' rwrw-- : 'HL-DT-ST' 'BDDVDRW CH08LS10'
        -------------------------------------------------------------------------

        cdrecord -scanbus
        scsibus2:
        2,0,0 200) 'HL-DT-ST' 'BDDVDRW CH08LS10' '2.00' Removable CD-ROM

    fdisk doesn't even list it. Ubuntu only automounts blank DVDs, but neither CD-ROMs nor blu-rays. I also changed the SATA slot, the SATA cable and the power cable. The drive works with a Windows system. This happens when I try to mount:

        sudo mount -t auto /dev/scd0 /media/bluray
        mount: you must specify the filesystem type

    I tried all filesystems there are. I also installed makemkv. It finds the drive but not the disc. Here is my /dev (ls -al /dev):

        total 12
        drwxr-xr-x 17 root root 4420 2011-11-25 19:36 .
        drwxr-xr-x 28 root root 4096 2011-11-25 07:12 ..
        crw------- 1 root root 10, 235 2011-11-25 19:28 autofs
        -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab
        -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab.old
        drwxr-xr-x 2 root root 700 2011-11-25 19:27 block
        drwxr-xr-x 2 root root 100 2011-11-25 19:27 bsg
        crw------- 1 root root 10, 234 2011-11-25 19:28 btrfs-control
        drwxr-xr-x 3 root root 60 2011-11-25 19:27 bus
        drwxr-xr-x 2 root root 3820 2011-11-25 19:28 char
        crw------- 1 root root 5, 1 2011-11-25 19:28 console
        lrwxrwxrwx 1 root root 11 2011-11-25 19:28 core -> /proc/kcore
        drwxr-xr-x 2 root root 60 2011-11-25 19:28 cpu
        crw------- 1 root root 10, 60 2011-11-25 19:28 cpu_dma_latency
        drwxr-xr-x 7 root root 140 2011-11-25 19:27 disk
        crw------- 1 root root 10, 61 2011-11-25 19:28 ecryptfs
        crw-rw---- 1 root video 29, 0 2011-11-25 19:28 fb0
        lrwxrwxrwx 1 root root 13 2011-11-25 19:28 fd -> /proc/self/fd
        crw-rw-rw- 1 root root 1, 7 2011-11-25 19:28 full
        crw-rw-rw- 1 root fuse 10, 229 2011-11-25 19:28 fuse
        crw------- 1 root root 251, 0 2011-11-25 19:28 hidraw0
        crw------- 1 root root 251, 1 2011-11-25 19:28 hidraw1
        crw------- 1 root root 10, 228 2011-11-25 19:28 hpet
        lrwxrwxrwx 1 root root 14 2011-11-25 19:27 .initramfs -> /run/initramfs
        drwxr-xr-x 4 root root 220 2011-11-25 19:28 input
        crw------- 1 root root 1, 11 2011-11-25 19:28 kmsg
        srw-rw-rw- 1 root root 0 2011-11-25 19:28 log
        brw-rw---- 1 root disk 7, 0 2011-11-25 19:28 loop0
        brw-rw---- 1 root disk 7, 1 2011-11-25 19:28 loop1
        brw-rw---- 1 root disk 7, 2 2011-11-25 19:28 loop2
        brw-rw---- 1 root disk 7, 3 2011-11-25 19:28 loop3
        brw-rw---- 1 root disk 7, 4 2011-11-25 19:28 loop4
        brw-rw---- 1 root disk 7, 5 2011-11-25 19:28 loop5
        brw-rw---- 1 root disk 7, 6 2011-11-25 19:28 loop6
        brw-rw---- 1 root disk 7, 7 2011-11-25 19:28 loop7
        drwxr-xr-x 2 root root 60 2011-11-25 19:27 mapper
        crw------- 1 root root 10, 227 2011-11-25 19:28 mcelog
        crw-r----- 1 root kmem 1, 1 2011-11-25 19:28 mem
        drwxr-xr-x 2 root root 60 2011-11-25 19:27 net
        crw------- 1 root root 10, 59 2011-11-25 19:28 network_latency
        crw------- 1 root root 10, 58 2011-11-25 19:28 network_throughput
        crw-rw-rw- 1 root root 1, 3 2011-11-25 19:28 null
        crw-rw-rw- 1 root root 195, 0 2011-11-25 19:28 nvidia0
        crw-rw-rw- 1 root root 195, 255 2011-11-25 19:28 nvidiactl
        crw------- 1 root root 1, 12 2011-11-25 19:28 oldmem
        crw-r----- 1 root kmem 1, 4 2011-11-25 19:28 port
        crw------- 1 root root 108, 0 2011-11-25 19:28 ppp
        crw------- 1 root root 10, 1 2011-11-25 19:28 psaux
        crw-rw-rw- 1 root tty 5, 2 2011-11-25 20:00 ptmx
        drwxr-xr-x 2 root root 0 2011-11-25 19:27 pts
        brw-rw---- 1 root disk 1, 0 2011-11-25 19:28 ram0
        brw-rw---- 1 root disk 1, 1 2011-11-25 19:28 ram1
        brw-rw---- 1 root disk 1, 10 2011-11-25 19:28 ram10
        brw-rw---- 1 root disk 1, 11 2011-11-25 19:28 ram11
        brw-rw---- 1 root disk 1, 12 2011-11-25 19:28 ram12
        brw-rw---- 1 root disk 1, 13 2011-11-25 19:28 ram13
        brw-rw---- 1 root disk 1, 14 2011-11-25 19:28 ram14
        brw-rw---- 1 root disk 1, 15 2011-11-25 19:28 ram15
        brw-rw---- 1 root disk 1, 2 2011-11-25 19:28 ram2
        brw-rw---- 1 root disk 1, 3 2011-11-25 19:28 ram3
        brw-rw---- 1 root disk 1, 4 2011-11-25 19:28 ram4
        brw-rw---- 1 root disk 1, 5 2011-11-25 19:28 ram5
        brw-rw---- 1 root disk 1, 6 2011-11-25 19:28 ram6
        brw-rw---- 1 root disk 1, 7 2011-11-25 19:28 ram7
        brw-rw---- 1 root disk 1, 8 2011-11-25 19:28 ram8
        brw-rw---- 1 root disk 1, 9 2011-11-25 19:28 ram9
        crw-rw-rw- 1 root root 1, 8 2011-11-25 19:28 random
        crw-rw-r--+ 1 root root 10, 62 2011-11-25 19:28 rfkill
        lrwxrwxrwx 1 root root 4 2011-11-25 19:28 rtc -> rtc0
        crw------- 1 root root 254, 0 2011-11-25 19:28 rtc0
        lrwxrwxrwx 1 root root 3 2011-11-25 19:38 scd0 -> sr0
        brw-rw---- 1 root disk 8, 0 2011-11-25 19:28 sda
        brw-rw---- 1 root disk 8, 1 2011-11-25 19:28 sda1
        brw-rw---- 1 root disk 8, 2 2011-11-25 19:28 sda2
        brw-rw---- 1 root disk 8, 3 2011-11-25 19:28 sda3
        brw-rw---- 1 root disk 8, 5 2011-11-25 19:28 sda5
        brw-rw---- 1 root disk 8, 6 2011-11-25 19:28 sda6
        brw-rw---- 1 root disk 8, 16 2011-11-25 19:28 sdb
        brw-rw---- 1 root disk 8, 17 2011-11-25 19:28 sdb1
        crw-rw---- 1 root disk 21, 0 2011-11-25 19:28 sg0
        crw-rw---- 1 root disk 21, 1 2011-11-25 19:28 sg1
        crw-rw----+ 1 root cdrom 21, 2 2011-11-25 19:28 sg2
        lrwxrwxrwx 1 root root 8 2011-11-25 19:28 shm -> /run/shm
        crw------- 1 root root 10, 231 2011-11-25 19:28 snapshot
        drwxr-xr-x 4 root root 280 2011-11-25 19:28 snd
        brw-rw----+ 1 root cdrom 11, 0 2011-11-25 19:38 sr0
        lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stderr -> /proc/self/fd/2
        lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdin -> /proc/self/fd/0
        lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdout -> /proc/self/fd/1
        crw-rw-rw- 1 root tty 5, 0 2011-11-25 19:35 tty
        crw--w---- 1 root tty 4, 0 2011-11-25 19:28 tty0
        crw------- 1 root root 4, 1 2011-11-25 19:28 tty1
        crw--w---- 1 root tty 4, 10 2011-11-25 19:28 tty10
        crw--w---- 1 root tty 4, 11 2011-11-25 19:28 tty11
        crw--w---- 1 root tty 4, 12 2011-11-25 19:28 tty12
        crw--w---- 1 root tty 4, 13 2011-11-25 19:28 tty13
        crw--w---- 1 root tty 4, 14 2011-11-25 19:28 tty14
        crw--w---- 1 root tty 4, 15 2011-11-25 19:28 tty15
        crw--w---- 1 root tty 4, 16 2011-11-25 19:28 tty16
        crw--w---- 1 root tty 4, 17 2011-11-25 19:28 tty17
        crw--w---- 1 root tty 4, 18 2011-11-25 19:28 tty18
        crw--w---- 1 root tty 4, 19 2011-11-25 19:28 tty19
        crw------- 1 root root 4, 2 2011-11-25 19:28 tty2
        crw--w---- 1 root tty 4, 20 2011-11-25 19:28 tty20
        crw--w---- 1 root tty 4, 21 2011-11-25 19:28 tty21
        crw--w---- 1 root tty 4, 22 2011-11-25 19:28 tty22
        crw--w---- 1 root tty 4, 23 2011-11-25 19:28 tty23
        crw--w---- 1 root tty 4, 24 2011-11-25 19:28 tty24
        crw--w---- 1 root tty 4, 25 2011-11-25 19:28 tty25
        crw--w---- 1 root tty 4, 26 2011-11-25 19:28 tty26
        crw--w---- 1 root tty 4, 27 2011-11-25 19:28 tty27
        crw--w---- 1 root tty 4, 28 2011-11-25 19:28 tty28
        crw--w---- 1 root tty 4, 29 2011-11-25 19:28 tty29
        crw------- 1 root root 4, 3 2011-11-25 19:28 tty3
        crw--w---- 1 root tty 4, 30 2011-11-25 19:28 tty30
        crw--w---- 1 root tty 4, 31 2011-11-25 19:28 tty31
        crw--w---- 1 root tty 4, 32 2011-11-25 19:28 tty32
        crw--w---- 1 root tty 4, 33 2011-11-25 19:28 tty33
        crw--w---- 1 root tty 4, 34 2011-11-25 19:28 tty34
        crw--w---- 1 root tty 4, 35 2011-11-25 19:28 tty35
        crw--w---- 1 root tty 4, 36 2011-11-25 19:28 tty36
        crw--w---- 1 root tty 4, 37 2011-11-25 19:28 tty37
        crw--w---- 1 root tty 4, 38 2011-11-25 19:28 tty38
        crw--w---- 1 root tty 4, 39 2011-11-25 19:28 tty39
        crw------- 1 root root 4, 4 2011-11-25 19:28 tty4
        crw--w---- 1 root tty 4, 40 2011-11-25 19:28 tty40
        crw--w---- 1 root tty 4, 41 2011-11-25 19:28 tty41
        crw--w---- 1 root tty 4, 42 2011-11-25 19:28 tty42
        crw--w---- 1 root tty 4, 43 2011-11-25 19:28 tty43
        crw--w---- 1 root tty 4, 44 2011-11-25 19:28 tty44
        crw--w---- 1 root tty 4, 45 2011-11-25 19:28 tty45
        crw--w---- 1 root tty 4, 46 2011-11-25 19:28 tty46
        crw--w---- 1 root tty 4, 47 2011-11-25 19:28 tty47
        crw--w---- 1 root tty 4, 48 2011-11-25 19:28 tty48
        crw--w---- 1 root tty 4, 49 2011-11-25 19:28 tty49
        crw------- 1 root root 4, 5 2011-11-25 19:28 tty5
        crw--w---- 1 root tty 4, 50 2011-11-25 19:28 tty50
        crw--w---- 1 root tty 4, 51 2011-11-25 19:28 tty51
        crw--w---- 1 root tty 4, 52 2011-11-25 19:28 tty52
        crw--w---- 1 root tty 4, 53 2011-11-25 19:28 tty53
        crw--w---- 1 root tty 4, 54 2011-11-25 19:28 tty54
        crw--w---- 1 root tty 4, 55 2011-11-25 19:28 tty55
        crw--w---- 1 root tty 4, 56 2011-11-25 19:28 tty56
        crw--w---- 1 root tty 4, 57 2011-11-25 19:28 tty57
        crw--w---- 1 root tty 4, 58 2011-11-25 19:28 tty58
        crw--w---- 1 root tty 4, 59 2011-11-25 19:28 tty59
        crw------- 1 root root 4, 6 2011-11-25 19:28 tty6
        crw--w---- 1 root tty 4, 60 2011-11-25 19:28 tty60
        crw--w---- 1 root tty 4, 61 2011-11-25 19:28 tty61
        crw--w---- 1 root tty 4, 62 2011-11-25 19:28 tty62
        crw--w---- 1 root tty 4, 63 2011-11-25 19:28 tty63
        crw--w---- 1 root tty 4, 7 2011-11-25 19:28 tty7
        crw--w---- 1 root tty 4, 8 2011-11-25 19:28 tty8
        crw--w---- 1 root tty 4, 9 2011-11-25 19:28 tty9
        crw------- 1 root root 5, 3 2011-11-25 19:28 ttyprintk
        crw-rw---- 1 root dialout 4, 64 2011-11-25 19:28 ttyS0
        crw-rw---- 1 root dialout 4, 65 2011-11-25 19:28 ttyS1
        crw-rw---- 1 root dialout 4, 74 2011-11-25 19:28 ttyS10
        crw-rw---- 1 root dialout 4, 75 2011-11-25 19:28 ttyS11
        crw-rw---- 1 root dialout 4, 76 2011-11-25 19:28 ttyS12
        crw-rw---- 1 root dialout 4, 77 2011-11-25 19:28 ttyS13
        crw-rw---- 1 root dialout 4, 78 2011-11-25 19:28 ttyS14
        crw-rw---- 1 root dialout 4, 79 2011-11-25 19:28 ttyS15
        crw-rw---- 1 root dialout 4, 80 2011-11-25 19:28 ttyS16
        crw-rw---- 1 root dialout 4, 81 2011-11-25 19:28 ttyS17
        crw-rw---- 1 root dialout 4, 82 2011-11-25 19:28 ttyS18
        crw-rw---- 1 root dialout 4, 83 2011-11-25 19:28 ttyS19
        crw-rw---- 1 root dialout 4, 66 2011-11-25 19:28 ttyS2
        crw-rw---- 1 root dialout 4, 84 2011-11-25 19:28 ttyS20
        crw-rw---- 1 root dialout 4, 85 2011-11-25 19:28 ttyS21
        crw-rw---- 1 root dialout 4, 86 2011-11-25 19:28 ttyS22
        crw-rw---- 1 root dialout 4, 87 2011-11-25 19:28 ttyS23
        crw-rw---- 1 root dialout 4, 88 2011-11-25 19:28 ttyS24
        crw-rw---- 1 root dialout 4, 89 2011-11-25 19:28 ttyS25
        crw-rw---- 1 root dialout 4, 90 2011-11-25 19:28 ttyS26
        crw-rw---- 1 root dialout 4, 91 2011-11-25 19:28 ttyS27
        crw-rw---- 1 root dialout 4, 92 2011-11-25 19:28 ttyS28
        crw-rw---- 1 root dialout 4, 93 2011-11-25 19:28 ttyS29
        crw-rw---- 1 root dialout 4, 67 2011-11-25 19:28 ttyS3
        crw-rw---- 1 root dialout 4, 94 2011-11-25 19:28 ttyS30
        crw-rw---- 1 root dialout 4, 95 2011-11-25 19:28 ttyS31
        crw-rw---- 1 root dialout 4, 68 2011-11-25 19:28 ttyS4
        crw-rw---- 1 root dialout 4, 69 2011-11-25 19:28 ttyS5
        crw-rw---- 1 root dialout 4, 70 2011-11-25 19:28 ttyS6
        crw-rw---- 1 root dialout 4, 71 2011-11-25 19:28 ttyS7
        crw-rw---- 1 root dialout 4, 72 2011-11-25 19:28 ttyS8
        crw-rw---- 1 root dialout 4, 73 2011-11-25 19:28 ttyS9
        drwxr-xr-x 3 root root 60 2011-11-25 19:28 .udev
        crw-r----- 1 root root 10, 223 2011-11-25 19:28 uinput
        crw-rw-rw- 1 root root 1, 9 2011-11-25 19:28 urandom
        drwxr-xr-x 2 root root 60 2011-11-25 19:27 usb
        crw------- 1 root root 252, 0 2011-11-25 19:28 usbmon0
        crw------- 1 root root 252, 1 2011-11-25 19:28 usbmon1
        crw------- 1 root root 252, 2 2011-11-25 19:28 usbmon2
        crw------- 1 root root 252, 3 2011-11-25 19:28 usbmon3
        crw------- 1 root root 252, 4 2011-11-25 19:28 usbmon4
        crw------- 1 root root 252, 5 2011-11-25 19:28 usbmon5
        crw------- 1 root root 252, 6 2011-11-25 19:28 usbmon6
        crw------- 1 root root 252, 7 2011-11-25 19:28 usbmon7
        crw------- 1 root root 252, 8 2011-11-25 19:28 usbmon8
        drwxr-xr-x 4 root root 80 2011-11-25 19:28 v4l
        crw------- 1 root root 10, 57 2011-11-25 19:28 vboxdrv
        crw------- 1 root root 10, 56 2011-11-25 19:28 vboxnetctl
        drwxr-x--- 4 root vboxusers 80 2011-11-25 19:28 vboxusb
        crw-rw---- 1 root tty 7, 0 2011-11-25 19:28 vcs
        crw-rw---- 1 root tty 7, 1 2011-11-25 19:28 vcs1
        crw-rw---- 1 root tty 7, 2 2011-11-25 19:28 vcs2
        crw-rw---- 1 root tty 7, 3 2011-11-25 19:28 vcs3
        crw-rw---- 1 root tty 7, 4 2011-11-25 19:28 vcs4
        crw-rw---- 1 root tty 7, 5 2011-11-25 19:28 vcs5
        crw-rw---- 1 root tty 7, 6 2011-11-25 19:28 vcs6
        crw-rw---- 1 root tty 7, 128 2011-11-25 19:28 vcsa
        crw-rw---- 1 root tty 7, 129 2011-11-25 19:28 vcsa1
        crw-rw---- 1 root tty 7, 130 2011-11-25 19:28 vcsa2
        crw-rw---- 1 root tty 7, 131 2011-11-25 19:28 vcsa3
        crw-rw---- 1 root tty 7, 132 2011-11-25 19:28 vcsa4
        crw-rw---- 1 root tty 7, 133 2011-11-25 19:28 vcsa5
        crw-rw---- 1 root tty 7, 134 2011-11-25 19:28 vcsa6
        crw------- 1 root root 10, 63 2011-11-25 19:28 vga_arbiter
        crw-rw----+ 1 root video 81, 0 2011-11-25 19:28 video0
        crw-rw-rw- 1 root root 1, 5 2011-11-25 19:28 zero

    sg_scan -i gives me:

        sudo sg_scan -i
        /dev/sg0: scsi0 channel=0 id=0 lun=0 [em]  ATA  ST31000524NS  SN12  [rmb=0 cmdq=0 pqual=0 pdev=0x0]
        /dev/sg1: scsi0 channel=0 id=1 lun=0 [em]  ATA  WDC WD15EADS-00S  01.0  [rmb=0 cmdq=0 pqual=0 pdev=0x0]
        /dev/sg2: scsi2 channel=0 id=0 lun=0 [em]  HL-DT-ST  BDDVDRW CH08LS10  2.00  [rmb=1 cmdq=0 pqual=0 pdev=0x5]

    sg_map gives me:

        /dev/sg0  /dev/sda
        /dev/sg1  /dev/sdb
        /dev/sg2  /dev/scd0

    lsscsi -l gives me:

        [0:0:0:0] disk   ATA      ST31000524NS      SN12  /dev/sda
          state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
        [0:0:1:0] disk   ATA      WDC WD15EADS-00S  01.0  /dev/sdb
          state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
        [2:0:0:0] cd/dvd HL-DT-ST BDDVDRW CH08LS10  2.00  /dev/sr0
          state=running queue_depth=1 scsi_level=6 type=5 device_blocked=0 timeout=30

    My udf module is:

        filename:   /lib/modules/3.0.0-14-generic/kernel/fs/udf/udf.ko
        license:    GPL
        description: Universal Disk Format Filesystem
        author:     Ben Fennema
        srcversion: 6ABDE012374D96B9685B8E5
        depends:    crc-itu-t
        vermagic:   3.0.0-14-generic SMP mod_unload modversions

    Do I need special drivers or modules enabled? Do I need to change some BIOS settings?

    Edit: Somehow I am now able to fire the mount command without any filesystem errors, but now I get:

        mount: no medium found on /dev/sr0

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well.

    The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y, etc. Well, as the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is:

        - Decorate every action you want as a menu item with a custom attribute
        - Reflect out all menu items into a structure at load time
        - Render the menu as CSS-friendly <ul><li> HTML

    The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
        public class MvcMenuItemAttribute : Attribute
        {
            public string MenuText { get; set; }
            public int Order { get; set; }
            public string ParentLink { get; set; }
            internal string Controller { get; set; }
            internal string Action { get; set; }

            #region ctor
            public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }

            public MvcMenuItemAttribute(string menuText, int order)
            {
                MenuText = menuText;
                Order = order;
            }

            internal string Link
            {
                get { return string.Format("/{0}/{1}", Controller, this.Action); }
            }

            internal MvcMenuItemAttribute ParentItem { get; set; }
            #endregion
        }

    The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks.

    The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.NET sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current" one. Although it doesn't solve the issue completely – one might say they wanted the second of the 2 links to be "current" – it allows one to include a link twice (home->deals and products->deals, etc.), and the logic of deciding "current" is easy enough to explain to the customer.
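    To make the decoration concrete, here is a minimal sketch of a controller wired up for a two-level menu. The SessionController name and its actions are hypothetical, chosen only to match the /Session/Index example above; the attribute semantics are exactly those just described:

        using System.Web.Mvc;

        public class SessionController : Controller
        {
            // Top-level menu item; its link resolves to /Session/Index.
            [MvcMenuItem("Sessions", Order = 10)]
            public ActionResult Index() { return View(); }

            // Child item: ParentLink must match the parent's /{controller}/{action} link.
            [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]
            public ActionResult Tracks() { return View(); }

            // MenuText overrides the display text; the action name still forms the link.
            [MvcMenuItem("Speaker Directory", Order = 30, ParentLink = "/Session/Index")]
            public ActionResult Speakers() { return View(); }
        }

    Run through the registration code that follows, this would yield a "Sessions" item with "Tracks" and "Speaker Directory" nested beneath it.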
    Now that we got that out of the way, let's build the menu data structure:

        public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
        {
            var result = new List<MvcMenuItemAttribute>();
            foreach (var type in assembly.GetTypes())
            {
                if (!type.IsSubclassOf(typeof(Controller)))
                {
                    continue;
                }
                foreach (var method in type.GetMethods())
                {
                    var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false)
                        as MvcMenuItemAttribute[];
                    if (items == null)
                    {
                        continue;
                    }
                    foreach (var item in items)
                    {
                        if (String.IsNullOrEmpty(item.Controller))
                        {
                            item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                        }
                        if (String.IsNullOrEmpty(item.Action))
                        {
                            item.Action = method.Name;
                        }
                        result.Add(item);
                    }
                }
            }
            return result.OrderBy(i => i.Order).ToList();
        }

    Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

        public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
        {
            _MenuItems = items;
            _MenuItems.ForEach(i => i.ParentItem =
                items.FirstOrDefault(p =>
                    String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
        }

    The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

        public static void RegisterMenuItems(Type mvcApplicationType)
        {
            RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
        }

    To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication, and that is why the Register call passes in that type.

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);

            MvcMenu.RegisterMenuItems(typeof(MvcApplication));
        }

    What else is left to do? Oh, right, render!
        public static void ShowMenu(this TextWriter output)
        {
            var writer = new HtmlTextWriter(output);
            renderHierarchy(writer, _MenuItems, null);
        }

        public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
        {
            var writer = new HtmlTextWriter(output);
            string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

            var menuItem = _MenuItems.FirstOrDefault(m =>
                m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
            if (menuItem != null)
            {
                renderBreadCrumb(writer, _MenuItems, menuItem);
            }
        }

        private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
        {
            if (current == null)
            {
                return;
            }
            var parent = current.ParentItem;
            renderBreadCrumb(writer, menuItems, parent);
            writer.Write(current.MenuText);
            writer.Write(" / ");
        }

        static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
        {
            if (!hierarchy.Any(i => i.ParentItem == root))
                return;

            writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
            {
                if (ItemFilter == null || ItemFilter(current))
                {
                    writer.RenderBeginTag(HtmlTextWriterTag.Li);
                    writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                    writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                    writer.RenderBeginTag(HtmlTextWriterTag.A);
                    writer.WriteEncodedText(current.MenuText);
                    writer.RenderEndTag(); // link
                    renderHierarchy(writer, hierarchy, current);
                    writer.RenderEndTag(); // li
                }
            }
            writer.RenderEndTag(); // ul
        }

    The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, generating string and byte intermediaries (yes, StringBuilder being no exception) rather than writing out using the actual writer on the actual stream disturbs me.

    To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature; a possible wiring is sketched at the end of this entry.

    To carry out rendering of a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, then rendering on the return from the recursion rather than as we go along deeper. I guess I was stuck in LISP that day... recursion is fun though.

    Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

        <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %>

    to show a breadcrumb trail (notice the lack of "=" after <% and the semicolon), and

        <% MvcMenu.ShowMenu(Writer); %>

    to show the menu.

    As mentioned before, the HTML output is nested <ul> <li> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs.

    This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.
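    A postscript on that ItemFilter hook: the rendering code above only shows that it is null-checked and invoked as ItemFilter(current), so assume for the sake of illustration that it is exposed as a public static Func<MvcMenuItemAttribute, bool> on MvcMenu. Under that assumption, a role-based filter could be wired up at startup roughly like this (a sketch, not part of the original listing):

        // Assumption: MvcMenu declares something like
        //   public static Func<MvcMenuItemAttribute, bool> ItemFilter { get; set; }
        // and lives in the same assembly as the attribute (Link is internal).
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            MvcMenu.RegisterMenuItems(typeof(MvcApplication));

            // Hide items under /Admin from users outside the Admin role.
            // The lambda runs per item at render time, so HttpContext.Current
            // refers to the request being rendered, not to startup.
            MvcMenu.ItemFilter = item =>
                !item.Link.StartsWith("/Admin", StringComparison.OrdinalIgnoreCase) ||
                HttpContext.Current.User.IsInRole("Admin");
        }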

    Read the article

  • My IIS server won't serve SSL sites to some browsers

    - by sbleon
    (Update: This is now cross-posted at http://stackoverflow.com/questions/3355000. This is the more appropriate forum, but StackOverflow gets a lot more traffic.)

    I've got an IIS 6.0 server that won't serve pages over SSL to some browsers. In WebKit-based browsers on OS X 10.6, I can't load pages at all. In MSIE 8 on Windows XP SP3, I can load pages, but it will sometimes hang downloading images or sending POSTs.

    Working:
        Firefox 3.6 (OS X + Windows)
        Chrome (Windows)
    Partially working:
        MSIE 8 (works sometimes, but hangs up, especially on POSTs)
    Not working:
        Chrome 5 (OS X)
        Safari 5 (OS X)
        Mobile Safari (iOS 4)

    On OS X (the easiest platform for me to test on), Chrome and Firefox both negotiate the same TLS cipher, but Chrome hangs on or after the post-negotiation handshake.

    Chrome packet capture (via ssldump):

        1 1  0.0485 (0.0485)  C>S  Handshake
              ClientHello
                Version 3.1
                cipher suites
                Unknown value 0xc00a
                Unknown value 0xc009
                Unknown value 0xc007
                Unknown value 0xc008
                Unknown value 0xc013
                Unknown value 0xc014
                Unknown value 0xc011
                Unknown value 0xc012
                Unknown value 0xc004
                Unknown value 0xc005
                Unknown value 0xc002
                Unknown value 0xc003
                Unknown value 0xc00e
                Unknown value 0xc00f
                Unknown value 0xc00c
                Unknown value 0xc00d
                Unknown value 0x2f
                TLS_RSA_WITH_RC4_128_SHA
                TLS_RSA_WITH_RC4_128_MD5
                Unknown value 0x35
                TLS_RSA_WITH_3DES_EDE_CBC_SHA
                Unknown value 0x32
                Unknown value 0x33
                Unknown value 0x38
                Unknown value 0x39
                TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
                TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
                compression methods
                NULL
        1 2  0.3106 (0.2620)  S>C  Handshake
              ServerHello
                Version 3.1
                session_id[32]=
                  bb 0e 00 00 7a 7e 07 50 5e 78 48 cf 43 5a f7 4d
                  d2 ed 72 8f ff 1d 9e 74 66 74 03 b3 bb 92 8d eb
                cipherSuite         TLS_RSA_WITH_RC4_128_MD5
                compressionMethod   NULL
              Certificate
              ServerHelloDone
        1 3  0.3196 (0.0090)  C>S  Handshake ClientKeyExchange
        1 4  0.3197 (0.0000)  C>S  ChangeCipherSpec
        1 5  0.3197 (0.0000)  C>S  Handshake
        [hang, no more data transmitted]

    Firefox packet capture:

        1 1  0.0485 (0.0485)  C>S  Handshake
              ClientHello
                Version 3.1
                resume [32]=
                  14 03 00 00 4e 28 de aa da 7a 25 87 25 32 f3 a7
                  ae 4c 2d a0 e4 57 cc dd d7 0e d7 82 19 f7 8f b9
                cipher suites
                Unknown value 0xff
                Unknown value 0xc00a
                Unknown value 0xc014
                Unknown value 0x88
                Unknown value 0x87
                Unknown value 0x39
                Unknown value 0x38
                Unknown value 0xc00f
                Unknown value 0xc005
                Unknown value 0x84
                Unknown value 0x35
                Unknown value 0xc007
                Unknown value 0xc009
                Unknown value 0xc011
                Unknown value 0xc013
                Unknown value 0x45
                Unknown value 0x44
                Unknown value 0x33
                Unknown value 0x32
                Unknown value 0xc00c
                Unknown value 0xc00e
                Unknown value 0xc002
                Unknown value 0xc004
                Unknown value 0x96
                Unknown value 0x41
                TLS_RSA_WITH_RC4_128_MD5
                TLS_RSA_WITH_RC4_128_SHA
                Unknown value 0x2f
                Unknown value 0xc008
                Unknown value 0xc012
                TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
                TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
                Unknown value 0xc00d
                Unknown value 0xc003
                Unknown value 0xfeff
                TLS_RSA_WITH_3DES_EDE_CBC_SHA
                compression methods
                NULL
        1 2  0.0983 (0.0497)  S>C  Handshake
              ServerHello
                Version 3.1
                session_id[32]=
                  14 03 00 00 4e 28 de aa da 7a 25 87 25 32 f3 a7
                  ae 4c 2d a0 e4 57 cc dd d7 0e d7 82 19 f7 8f b9
                cipherSuite         TLS_RSA_WITH_RC4_128_MD5
                compressionMethod   NULL
        1 3  0.0983 (0.0000)  S>C  ChangeCipherSpec
        1 4  0.0983 (0.0000)  S>C  Handshake
        1 5  0.1019 (0.0035)  C>S  ChangeCipherSpec
        1 6  0.1019 (0.0000)  C>S  Handshake
        1 7  0.1019 (0.0000)  C>S  application_data
        1 8  0.2460 (0.1440)  S>C  application_data
        1 9  0.3108 (0.0648)  S>C  application_data
        1 10 0.3650 (0.0542)  S>C  application_data
        1 11 0.4188 (0.0537)  S>C  application_data
        1 12 0.4580 (0.0392)  S>C  application_data
        1 13 0.4831 (0.0251)  S>C  application_data
        [etc]

    Update: Here's a Wireshark capture from the server end. What's going on with those two much-delayed RST packets? Is that just IIS terminating what it perceives as a non-responsive connection?

        19    10.129450  67.249.xxx.xxx  10.100.xxx.xx   TCP    50653 > https [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=3 TSV=699250189 TSER=0
        20    10.129517  10.100.xxx.xx   67.249.xxx.xxx  TCP    https > 50653 [SYN, ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460 WS=0 TSV=0 TSER=0
        21    10.168596  67.249.xxx.xxx  10.100.xxx.xx   TCP    50653 > https [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSV=699250189 TSER=0
        22    10.172950  67.249.xxx.xxx  10.100.xxx.xx   TLSv1  Client Hello
        23    10.173267  10.100.xxx.xx   67.249.xxx.xxx  TCP    [TCP segment of a reassembled PDU]
        24    10.173297  10.100.xxx.xx   67.249.xxx.xxx  TCP    [TCP segment of a reassembled PDU]
        25    10.385180  67.249.xxx.xxx  10.100.xxx.xx   TCP    50653 > https [ACK] Seq=148 Ack=2897 Win=524280 Len=0 TSV=699250191 TSER=163006
        26    10.385235  10.100.xxx.xx   67.249.xxx.xxx  TLSv1  Server Hello, Certificate, Server Hello Done
        27    10.424682  67.249.xxx.xxx  10.100.xxx.xx   TCP    50653 > https [ACK] Seq=148 Ack=4215 Win=524280 Len=0 TSV=699250192 TSER=163008
        28    10.435245  67.249.xxx.xxx  10.100.xxx.xx   TLSv1  Client Key Exchange
        29    10.438522  67.249.xxx.xxx  10.100.xxx.xx   TLSv1  Change Cipher Spec
        30    10.438553  10.100.xxx.xx   67.249.xxx.xxx  TCP    https > 50653 [ACK] Seq=4215 Ack=421 Win=65115 Len=0 TSV=163008 TSER=699250192
        31    10.449036  67.249.xxx.xxx  10.100.xxx.xx   TLSv1  Encrypted Handshake Message
        32    10.580652  10.100.xxx.xx   67.249.xxx.xxx  TCP    https > 50653 [ACK] Seq=4215 Ack=458 Win=65078 Len=0 TSV=163010 TSER=699250192
        7312  57.315338  10.100.xxx.xx   67.249.xxx.xxx  TCP    https > 50644 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0
        19531 142.316425 10.100.xxx.xx   67.249.xxx.xxx  TCP    https > 50653 [RST, ACK] Seq=4215 Ack=458 Win=0 Len=0

    Read the article
