Search Results

Search found 1256 results on 51 pages for 'explicit'.


  • ftp-client works fine. ftp-tls-client fails on one computer and works on another

    - by ispiro
    Connecting to an FTP server from a Windows Server 2012 machine works both secure (over TLS) and insecure. From a Windows 7 machine it succeeds insecure but fails when secure. (Using explicit TLS and passive mode.) The FileZilla log shows: 234 AUTH command ok. Expecting TLS Negotiation. Initializing TLS... Connection timed out. I've tried many things but nothing helps. (I'm also trying this programmatically. For details see: http://stackoverflow.com/questions/25393716/ftp-ssl-fails-after-expecting-tls-negotiation ) The fact that it succeeds from one computer proves that the FTP server is fine, and the fact that the Windows 7 computer succeeds without TLS proves that it's not a NAT/firewall problem (besides, it failed even after disabling the firewall etc.). I'm not sure where to start looking. Perhaps a difference between desktop Windows and Windows Server? EDIT: The FTP server is on a Windows Server 2012 machine.
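
    For reference, a minimal sketch of how explicit FTPS is commonly attempted from .NET with FtpWebRequest; the host, credentials, and the permissive certificate callback below are placeholders rather than details from the question:

    using System;
    using System.IO;
    using System.Net;

    class FtpsProbe
    {
        static void Main()
        {
            // Accept the server certificate for testing only; validate properly in production.
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, cert, chain, errors) => true;

            // Hypothetical host and credentials, for illustration only.
            var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/");
            request.Method = WebRequestMethods.Ftp.ListDirectory;
            request.Credentials = new NetworkCredential("user", "password");
            request.EnableSsl = true;    // explicit TLS (AUTH TLS) on the control channel
            request.UsePassive = true;   // passive mode, as in the question
            request.KeepAlive = false;

            using (var response = (FtpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }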

    Read the article

  • New cast exception with VS2010/.Net 4

    - by Trevor
    [ Updated 25 May 2010 ] I've recently upgraded from VS2008 to VS2010, and at the same time upgraded to .Net 4. I've recompiled an existing solution of mine and I'm encountering a Cast exception I did not have before. The structure of the code is simple (although the actual implementation somewhat more complicated). Basically I have: public class SomeClass : ISomeClass { // Stuff } public static class ClassFactory { public static IInterface GetClassInstance<IInterface>(Type classType) { return (IInterface)Activator.CreateInstance(classType); // This throws a cast exception } } // Call the factory with: ISomeClass anInstance = ClassFactory.GetClassInstance<ISomeClass>(typeof(SomeClass)); Ignore the 'sensibleness' of the above - its provides just a representation of the issue rather than the specifics of what I'm doing (e.g. constructor parameters have been removed). The marked line throws the exception: Unable to cast object of type 'Namespace.SomeClass' to type 'Namespace.ISomeClass'. I suspect it may have something to do with the additional DotNet security (and in particular, explicit loading of assemblies, as this is something my app does). The reason I suspect this is that I have had to add to the config file the setting: <runtime> <loadFromRemoteSources enabled="true" /> </runtime> .. but I'm unsure if this is related. Update I see (from comments) that my basic code does not reproduce the issue by itself. Not surprising I suppose. It's going to be tricky to identify which part of a largish 3-tier CQS system is relevant to this problem. One issue might be that there are multiple assemblies involved. My static class is actually a factory provider, and the 'SomeClass' is a class factory (relevant in that the factories are 'registered' within the app via explicit assembly/type loading - see below) . Upfront I use reflection to 'register' all factories (i.e. classes that implement a particular interface) and that I do this when the app starts by identifying the relevant assemblies, loading them and adding them to a cache using (in essence): Loop over (file in files) { Assembly assembly = Assembly.LoadFile(file); baseAssemblyList.Add(assembly); } Then I cache the available types in these assemblies with: foreach (Assembly assembly in _loadedAssemblyList) { Type[] assemblyTypes = assembly.GetTypes(); _loadedTypesCache.AddRange(assemblyTypes); } And then I use this cache to do a variety of reflection operations, including 'registering' of factories, which involves looping through all loaded (cached) types and finding those that implement the (base) Factory interface. I've experienced what may be a similar problem in the past (.Net 3.5, so not exactly the same) with an architecture that involved dynamically creating classes on the server and streaming the compiled binary of those classes to the client app. The problem came when trying to deserialize an instance of the dynamic class on the client from a remote call: the exception said the class type was not know, even though the source and destination types were exactly the same name (including namespace). Basically the cross boundry versions of the class were not recognised as being the same. I solved that by intercepting the deserialization process and explicitly defining the deseriazation class type in the context of the local assemblies. 
This experience is what makes me think the types are considered mismatched because (somehow) the interface on the actual SomeClass object and the interface passed into the generic method are not considered the same type. So (possibly) my question for those more knowledgeable about C#/.NET is: how does class loading work such that my app thinks there are two versions/types of the interface type, and how can I fix that? [ whew ... anyone who got here is quite patient .. thanks ]
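
    One way to test that theory is to compare where each copy of the interface type was loaded from. The sketch below is hypothetical and only reuses the placeholder names from the question; calling something like TypeIdentityCheck.Dump<ISomeClass>(anInstance) would show whether two copies of the interface assembly are in play:

    using System;
    using System.Linq;

    static class TypeIdentityCheck
    {
        // Compare the interface the calling code compiled against (TInterface, e.g. the
        // question's ISomeClass) with the same-named interface on the created instance.
        public static void Dump<TInterface>(object instance)
        {
            Type expected = typeof(TInterface);
            Type actual = instance.GetType()
                                  .GetInterfaces()
                                  .FirstOrDefault(i => i.FullName == expected.FullName);

            Console.WriteLine("Expected: {0}", expected.AssemblyQualifiedName);
            Console.WriteLine("   from : {0}", expected.Assembly.Location);
            if (actual == null)
            {
                Console.WriteLine("Actual  : interface not found on instance");
                return;
            }
            Console.WriteLine("Actual  : {0}", actual.AssemblyQualifiedName);
            Console.WriteLine("   from : {0}", actual.Assembly.Location);
            // False here means two copies of the interface's assembly are loaded,
            // typically one via Assembly.LoadFile and one from the normal probing path.
            Console.WriteLine("Same runtime type: {0}", expected == actual);
        }
    }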

    Read the article

  • XML and XSD - use element name as replacement of xsi:type for polymorphism

    - by disown
    Taking the W3C vehicle XSD as an example: <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://cars.example.com/schema" xmlns:target="http://cars.example.com/schema"> <complexType name="Vehicle" abstract="true"/> <complexType name="Car"> <complexContent> <extension base="target:Vehicle"/> ... </complexContent> </complexType> <complexType name="Plane"> <complexContent> <extension base="target:Vehicle"/> <sequence> <element name="wingspan" type="integer"/> </sequence> </complexContent> </complexType> </schema> , and the following definition of 'meansOfTravel': <complexType name="MeansOfTravel"> <complexContent> <sequence> <element name="transport" type="target:Vehicle"/> </sequence> </complexContent> </complexType> <element name="meansOfTravel" type="target:MeansOfTravel"/> With this definition you need to specify the type of your instance using xsi:type, like this: <meansOfTravel> <transport xsi:type="Plane"> <wingspan>3</wingspan> </transport> </meansOfTravel> I would just like to acheive a 'name of type' - 'name of element' mapping so that this could be replaced with just <meansOfTravel> <plane> <wingspan>3</wingspan> </plane> </meansOfTravel> The only way I could do this until now is by making it explicit: <complexType name="MeansOfTravel"> <sequence> <choice> <element name="plane" type="target:Plane"/> <element name="car" type="target:Car"/> </choice> </sequence> </complexType> <element name="meansOfTravel" type="target:MeansOfTravel"/> But this means that I have to list all possible sub-types in the 'MeansOfTravel' complex type. Is there no way of making the XML parser assume that you mean a 'Plane' if you call the element 'plane'? Or do I have to make the choice explicit? I would just like to keep my design DRY - if you have any other suggestions (like groups or so) - i would love to hear them.

    Read the article

  • Using VBA / Macro to highlight changes in excel

    - by Zaj
    I have a spread sheet that I send out to various locations to have information on it updated and then sent back to me. However, I had to put validation and lock the cells to force users to input accurate information. Then I can to use VBA to disable the work around of cut copy and paste functions. And additionally I inserted a VBA function to force users to open the excel file in Macros. Now I'm trying to track the changes so that I know what was updated when I recieve the sheet back. However everytime i do this I get an error when someone savesthe document and randomly it will lock me out of the document completely. I have my code pasted below, can some one help me create code in the VBA forum to highlight changes instead of through excel's share/track changes option? ThisWorkbook (Code): Option Explicit Const WelcomePage = "Macros" Private Sub Workbook_BeforeClose(Cancel As Boolean) Call ToggleCutCopyAndPaste(True) 'Turn off events to prevent unwanted loops Application.EnableEvents = False 'Evaluate if workbook is saved and emulate default propmts With ThisWorkbook If Not .Saved Then Select Case MsgBox("Do you want to save the changes you made to '" & .Name & "'?", _ vbYesNoCancel + vbExclamation) Case Is = vbYes 'Call customized save routine Call CustomSave Case Is = vbNo 'Do not save Case Is = vbCancel 'Set up procedure to cancel close Cancel = True End Select End If 'If Cancel was clicked, turn events back on and cancel close, 'otherwise close the workbook without saving further changes If Not Cancel = True Then .Saved = True Application.EnableEvents = True .Close savechanges:=False Else Application.EnableEvents = True End If End With End Sub Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean) 'Turn off events to prevent unwanted loops Application.EnableEvents = False 'Call customized save routine and set workbook's saved property to true '(To cancel regular saving) Call CustomSave(SaveAsUI) Cancel = True 'Turn events back on an set saved property to true Application.EnableEvents = True ThisWorkbook.Saved = True End Sub Private Sub Workbook_Open() Call ToggleCutCopyAndPaste(False) 'Unhide all worksheets Application.ScreenUpdating = False Call ShowAllSheets Application.ScreenUpdating = True End Sub Private Sub CustomSave(Optional SaveAs As Boolean) Dim ws As Worksheet, aWs As Worksheet, newFname As String 'Turn off screen flashing Application.ScreenUpdating = False 'Record active worksheet Set aWs = ActiveSheet 'Hide all sheets Call HideAllSheets 'Save workbook directly or prompt for saveas filename If SaveAs = True Then newFname = Application.GetSaveAsFilename( _ fileFilter:="Excel Files (*.xls), *.xls") If Not newFname = "False" Then ThisWorkbook.SaveAs newFname Else ThisWorkbook.Save End If 'Restore file to where user was Call ShowAllSheets aWs.Activate 'Restore screen updates Application.ScreenUpdating = True End Sub Private Sub HideAllSheets() 'Hide all worksheets except the macro welcome page Dim ws As Worksheet Worksheets(WelcomePage).Visible = xlSheetVisible For Each ws In ThisWorkbook.Worksheets If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVeryHidden Next ws Worksheets(WelcomePage).Activate End Sub Private Sub ShowAllSheets() 'Show all worksheets except the macro welcome page Dim ws As Worksheet For Each ws In ThisWorkbook.Worksheets If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVisible Next ws Worksheets(WelcomePage).Visible = xlSheetVeryHidden End Sub Private Sub Workbook_Activate() Call ToggleCutCopyAndPaste(False) End Sub 
Private Sub Workbook_Deactivate() Call ToggleCutCopyAndPaste(True) End Sub This is in my ModuleCode: Option Explicit Sub ToggleCutCopyAndPaste(Allow As Boolean) 'Activate/deactivate cut, copy, paste and pastespecial menu items Call EnableMenuItem(21, Allow) ' cut Call EnableMenuItem(19, Allow) ' copy Call EnableMenuItem(22, Allow) ' paste Call EnableMenuItem(755, Allow) ' pastespecial 'Activate/deactivate drag and drop ability Application.CellDragAndDrop = Allow 'Activate/deactivate cut, copy, paste and pastespecial shortcut keys With Application Select Case Allow Case Is = False .OnKey "^c", "CutCopyPasteDisabled" .OnKey "^v", "CutCopyPasteDisabled" .OnKey "^x", "CutCopyPasteDisabled" .OnKey "+{DEL}", "CutCopyPasteDisabled" .OnKey "^{INSERT}", "CutCopyPasteDisabled" Case Is = True .OnKey "^c" .OnKey "^v" .OnKey "^x" .OnKey "+{DEL}" .OnKey "^{INSERT}" End Select End With End Sub Sub EnableMenuItem(ctlId As Integer, Enabled As Boolean) 'Activate/Deactivate specific menu item Dim cBar As CommandBar Dim cBarCtrl As CommandBarControl For Each cBar In Application.CommandBars If cBar.Name <> "Clipboard" Then Set cBarCtrl = cBar.FindControl(ID:=ctlId, recursive:=True) If Not cBarCtrl Is Nothing Then cBarCtrl.Enabled = Enabled End If Next End Sub Sub CutCopyPasteDisabled() 'Inform user that the functions have been disabled MsgBox " Cutting, copying and pasting have been disabled in this workbook. Please hard key in data. " End Sub

    Read the article

  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
    I recently received an interesting comment on the blog regarding a workaround for the precision issue you run into when dealing with DATETIME and DATETIME2. I have written about this subject earlier here: SQL SERVER – Difference Between GETDATE and SYSDATETIME; SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE; SQL SERVER – Difference Between DATETIME and DATETIME2. SQL expert Jing Sheng Zhong left the following comment: The issue you found with SQL Server's new datetime types is related to time source function precision. Folks have found the root cause of the problem – when datetime values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results do not match each other as expected. Here I would like to give a workaround for the problem the developers ran into. -- Declare and loop DECLARE @Intveral INT, @CurDate DATETIMEOFFSET; CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET) SET @Intveral = 10000 WHILE (@Intveral > 0) BEGIN ----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision for future use only SET @CurDate = TODATETIMEOFFSET(GETDATE(),DATEDIFF(N,GETUTCDATE(),GETDATE())); -- lower precision to match existing date process INSERT #TimeTable (FirstDate, LastDate, GlobalDate) VALUES (@CurDate, @CurDate, @CurDate) SET @Intveral = @Intveral - 1 END GO -- Distinct Values SELECT COUNT(DISTINCT FirstDate) D_DATETIME, COUNT(DISTINCT LastDate) D_DATETIME2, COUNT(DISTINCT GlobalDate) D_SYSGETDATE FROM #TimeTable GO -- Join SELECT DISTINCT a.FirstDate, b.LastDate, b.GlobalDate, CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime FROM #TimeTable a INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME) GO -- Select SELECT * FROM #TimeTable GO -- Clean up DROP TABLE #TimeTable GO If you read my blog post SQL SERVER – Difference Between DATETIME and DATETIME2 you will notice that I achieved the same thing using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested to know the use case. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
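
    The same precision mismatch can be seen from the client side. As a rough illustration (not part of the original post), System.Data.SqlTypes.SqlDateTime rounds a .NET DateTime to the 1/300-second resolution of the datetime type, so a value rarely compares equal after the round trip:

    using System;
    using System.Data.SqlTypes;

    class PrecisionDemo
    {
        static void Main()
        {
            // DateTime has 100-nanosecond ticks; the SQL Server datetime type
            // (mirrored by SqlDateTime) only stores 1/300-second increments,
            // so a round trip usually changes the value slightly.
            DateTime original = DateTime.Now;
            DateTime asSqlDatetime = new SqlDateTime(original).Value;

            Console.WriteLine("original        : {0:O}", original);
            Console.WriteLine("as sql datetime : {0:O}", asSqlDatetime);
            Console.WriteLine("equal?          : {0}", original == asSqlDatetime);
            Console.WriteLine("difference (ms) : {0}", (original - asSqlDatetime).TotalMilliseconds);
        }
    }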

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I disgress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objecs is two fold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks much a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenearios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform  custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Bascially you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invokations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference of running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower the perf hit is a lot less typically than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…© Rick Strahl, West Wind Technologies, 2005-2011Posted in CSharp  .NET   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();
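
    As a footnote to the 'method missing' remark above, a small illustrative sketch (not from the article) of a DynamicObject that intercepts arbitrary method calls by overriding TryInvokeMember:

    using System;
    using System.Dynamic;

    // Minimal sketch: any method call on this object is intercepted and handled
    // at runtime, which is the "method missing" style behavior referred to above.
    public class MethodMissingDemo : DynamicObject
    {
        public override bool TryInvokeMember(InvokeMemberBinder binder,
                                             object[] args,
                                             out object result)
        {
            // Echo back the method name and arguments instead of failing.
            result = string.Format("Called '{0}' with {1} argument(s)",
                                   binder.Name, args.Length);
            return true;
        }
    }

    // Usage:
    // dynamic demo = new MethodMissingDemo();
    // Console.WriteLine(demo.AnythingGoes(1, "two", 3.0));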

    Read the article

  • How do I create a popup banner before login with Lightdm?

    - by Rich Loring
    When Ubuntu was using gnome I was able to create a popup banner like the banner below before the login screen using zenity in the /etc/gdm/Init/Default. The line of code would be like this: if [ -f "/usr/bin/zenity" ]; then /usr/bin/zenity --info --text="`cat /etc/issue`" --no-wrap; else xmessage -file /etc/issue -button ok -geometry 540X480; fi How can I accomplish this with Unity? NOTICE TO USERS This is a Federal computer system (and/or it is directly connected to a BNL local network system) and is the property of the United States Government. It is for authorized use only. Users (authorized or unauthorized) have no explicit or implicit expectation of privacy. Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to authorized site, Department of Energy, and law enforcement personnel, as well as authorized officials of other agencies, both domestic and foreign. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of authorized site or Department of Energy personnel. Unauthorized or improper use of this system may result in administrative disciplinary action and civil and criminal penalties. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning.

    Read the article

  • Sync Blocker Stops iTunes from Automatically Syncing

    - by ETC
    If you’re looking to put an end to iTunes’ overly aggressive syncing, Sync Blocker is a free application that stops automatic iTunes synchronization and keeps your iPad, iPhone, iPod, and iPod Touch data from being accidentally deleted. There are settings within iTunes you can toggle and even keyboard shortcuts you can use to temporarily suspend the syncing while mounting your iOS device. If you want to skip that hassle, however, and rest easy knowing that your iOS device will only be synced and updated when you give an explicit go-ahead, Sync Blocker is a free application for both Windows and Mac OS X machines that completely blocks iTunes from syncing without your permission. Previously $22, Sync Blocker is now free. Hit up the link below for additional information and to grab a copy of the software. Sync Blocker [Zelek Software via Addictive Tips]

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; -- ================ -- Singleton lookup -- ================ ; -- Single value equality seek in a unique index -- Scan count = 0 when STATISTIS IO is ON -- Check the XML SHOWPLAN SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32 ; -- =========== -- Range Scans -- =========== ; -- Query 1 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC ; -- Query 2 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC ; -- Query 3 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC ; -- Query 4 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC ; -- Final query (singleton + range = 2 range scans) SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC ; -- === TIDY UP === DROP TABLE dbo.Example; © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain.  This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types. What is a Tuple? The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate.  Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order.  That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int.  The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.  Some people tend to love tuples because they give you a quick way to combine multiple values into one result.  This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of.  They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine.  In these cases, you do not need to define a class, simply create a tuple containing the types you wish to return, and you are ready to go? On the other hand, there are some people who see tuples as a crutch in object-oriented design.  They may view the tuple as a very watered down class with very little inherent semantic meaning.  As an example, what if you saw this in a piece of code: 1: var x = new Tuple<int, int>(2, 5); What are the contents of this tuple?  If the tuple isn't named appropriately, and if the contents of each member are not self evident from the type this can be a confusing question.  The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as: 1: public sealed class RetrySettings 2: { 3: public int TimeoutSeconds { get; set; } 4: public int MaxRetries { get; set; } 5: } Here, the meaning of each int in the class is much more clear, but it's a bit more work to create the class and can clutter a solution with extra classes. So, what's the correct way to go?  That's a tough call.  You will have people who will argue quite well for one or the other.  For me, I consider the Tuple to be a tool to make it easy to collect values together easily.  There are times when I just need to combine items for a key or a result, in which case the tuple is short lived and so the meaning isn't easily lost and I feel this is a good compromise.  If the scope of the collection of items, though, is more application-wide I tend to favor creating a full class. Finally, it should be noted that tuples are immutable.  That means they are assigned a value at construction, and that value cannot be changed.  Now, of course if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to. Tuples from 1 to N Tuples come in all sizes, you can have as few as one element in your tuple, or as many as you like.  However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly. 
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments: 1: // a singleton tuple of integer 2: Tuple<int> x; 3:  4: // or more 5: Tuple<int, double> y; 6:  7: // up to seven 8: Tuple<int, double, char, double, int, string, uint> z; Anything eight and above, and we have to nest tuples inside of tuples.  The last element of the 8-tuple is the generic type parameter Rest, this is special in that the Tuple checks to make sure at runtime that the type is a Tuple.  This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property. 1: // an 8-tuple 2: Tuple<int, int, int, int, int, double, char, Tuple<string>> t8; 3:  4: // an 9-tuple 5: Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9; 6:  7: // a 16-tuple 8: Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int,int>>> t14; Notice that on the 14-tuple we had to have a nested tuple in the nested tuple.  Since the tuple can only support up to seven items, and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.  Constructing tuples Constructing tuples is just as straightforward as declaring them.  That said, you have two distinct ways to do it.  The first is to construct the tuple explicitly yourself: 1: var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927); This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively.  Make sure the order of the arguments supplied matches the order of the types!  Also notice that we can't half-assign a tuple or create a default tuple.  Tuples are immutable (you can't change the values once constructed), so thus you must provide all values at construction time. Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods.  These methods (much like C++'s std::make_pair method) will infer the types from the method call so you don't have to type them in.  This can dramatically reduce the amount of typing required especially for complex tuples! 1: // this 4-tuple is typed Tuple<int, double, string, char> 2: var t4 = Tuple.Create(42, 3.1415927, "Love", 'X'); Notice how much easier it is to use the factory methods and infer the types?  This can cut down on typing quite a bit when constructing tuples.  The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be a octuple where the last item is a singleton as we described before in nested tuples. Accessing tuple members Accessing a tuple's members is simplicity itself… mostly.  The properties for accessing up to the first seven items are Item1, Item2, …, Item7.  If you have an octuple or beyond, the final property is Rest which will give you the nested tuple which you can then access in a similar matter.  Once again, keep in mind that these are read-only properties and cannot be changed. 
1: // for septuples and below, use the Item properties 2: var t1 = Tuple.Create(42, 3.14); 3:  4: Console.WriteLine("First item is {0} and second is {1}", 5: t1.Item1, t1.Item2); 6:  7: // for octuples and above, use Rest to retrieve nested tuple 8: var t9 = new Tuple<int, int, int, int, int, int, int, 9: Tuple<int, int>>(1,2,3,4,5,6,7,Tuple.Create(8,9)); 10:  11: Console.WriteLine("The 8th item is {0}", t9.Rest.Item1); Tuples are IStructuralComparable and IStructuralEquatable Most of you know about IComparable and IEquatable, what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples.  These IStructuralComparable and IStructuralEquatable make it easy to compare two tuples for equality and ordering.  This is invaluable for sorting, and makes it easy to use tuples as a compound-key to a dictionary (one of my favorite uses)! Why is this so important?  Remember when we said that some folks think tuples are too generic and you should define a custom class?  This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task!  Thankfully the tuple does this all for you through the explicit implementations of these interfaces. For equality, two tuples are equal if all elements are equal between the two tuples, that is if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on.  For ordering, it's a little more complex in that it compares the two tuples one at a time starting at Item1, and sees which one has a smaller Item1.  If one has a smaller Item1, it is the smaller tuple.  However if both Item1 are the same, it compares Item2 and so on. For example: 1: var t1 = Tuple.Create(1, 3.14, "Hi"); 2: var t2 = Tuple.Create(1, 3.14, "Hi"); 3: var t3 = Tuple.Create(2, 2.72, "Bye"); 4:  5: // true, t1 == t2 because all items are == 6: Console.WriteLine("t1 == t2 : " + t1.Equals(t2)); 7:  8: // false, t1 != t2 because at least one item different 9: Console.WriteLine("t2 == t2 : " + t2.Equals(t3)); The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface: 1: // true because t1.Item1 < t3.Item1, if had been same would check Item2 and so on 2: Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0)); So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList: 1: var tupleDict = new Dictionary<Tuple<int, double, string>, string>(); 2:  3: tupleDict.Add(t1, "First tuple"); 4: tupleDict.Add(t2, "Second tuple"); 5: tupleDict.Add(t3, "Third tuple"); Because IEquatable defines GetHashCode(), and Tuple's IStructuralEquatable implementation creates this hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!  
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1 day chart, 1 week chart, etc): 1: // the account number (string) and number of days (int) are key to get cached chart 2: var chartCache = new Dictionary<Tuple<string, int>, IChart>(); Summary The System.Tuple, like any tool, is best used where it will achieve a greater benefit.  I wouldn't advise overusing them, on objects with a large scope or it can become difficult to maintain.  However, when used properly in a well defined scope they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter. Tweet Technorati Tags: C#,.NET,Tuple,Little Wonders
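
    To round out the two uses called out at the start of the article, returning multiple values and passing several values through a single object parameter such as a thread's start argument, here is a short sketch offered as an illustration rather than code from the article:

    using System;
    using System.Threading;

    class TupleUses
    {
        // Return two values from one method without out/ref parameters.
        static Tuple<int, int> MinMax(int[] values)
        {
            int min = int.MaxValue, max = int.MinValue;
            foreach (int v in values)
            {
                if (v < min) min = v;
                if (v > max) max = v;
            }
            return Tuple.Create(min, max);
        }

        // Thread start routines take a single object parameter; a tuple lets us
        // pass several strongly-typed values through it.
        static void Work(object state)
        {
            var args = (Tuple<string, int>)state;
            Console.WriteLine("job {0}: {1} retries", args.Item1, args.Item2);
        }

        static void Main()
        {
            var result = MinMax(new[] { 7, 2, 9, 4 });
            Console.WriteLine("min={0} max={1}", result.Item1, result.Item2);

            var worker = new Thread(Work);
            worker.Start(Tuple.Create("job-42", 3));
            worker.Join();
        }
    }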

    Read the article

  • Unable to cast transparent proxy to type <type>

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains. In almost all cases the problem I've run into with this error the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come exactly from the same assembly the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds. An Example of Failure To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly process it and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible as well as isolating my code from the assembly that's being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is: Create a new AppDomain Load a Factory Class into the AppDomain Use the Factory Class to load additional types from the remote domain Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:public class TypeParserFactory : System.MarshalByRefObject,IDisposable { …/// <summary> /// TypeParser Factory method that loads the TypeParser /// object into a new AppDomain so it can be unloaded. /// Creates AppDomain and creates type. /// </summary> /// <returns></returns> public TypeParser CreateTypeParser() { if (!CreateAppDomain(null)) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! TypeParser parser = null; try { Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); } catch (Exception ex) { this.ErrorMessage = ex.GetBaseException().Message; return null; } return parser; } private bool CreateAppDomain(string lcAppDomain) { if (lcAppDomain == null) lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin"); this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain,null,setup); // Need a custom resolver so we can load assembly from non current path AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); return true; } …} Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object so all classes are MarshalByRefObject. The specific problem code is the loading up a new type which points at an assembly that visible both in the current domain and the remote domain and then instantiates a type from it. 
This is the code in question:Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back but it can't be cast to the TypeParser type that is loaded in the current AppDomain. Finding the Problem To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance: Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); // dynamic works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // casting fails parser = (TypeParser)objparser; So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious.Another couple of tries reveal the problem however:// works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // c:\wwapps\wwhelp\wwReflection20.dll (Current Execution Folder) string info3 = typeof(TypeParser).Assembly.CodeBase; // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder) string info4 = dynParser.GetType().Assembly.CodeBase; // fails parser = (TypeParser)objparser; As you can see the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:public string GetVersionInfo() { return ".NET Version: " + Environment.Version.ToString() + "\r\n" + "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" + "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" + "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" + "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n"; } For the factory I got: .NET Version: 4.0.30319.239wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dllAssembly Cur Dir: c:\wwapps\wwhelpApplicationBase: C:\Programs\vfp9\App Domain: wwReflection534cfa1f For the instance type I got: .NET Version: 4.0.30319.239wwReflection Assembly: C:\\Programs\\vfp9\wwreflection20.dllAssembly Cur Dir: c:\\wwapps\\wwhelpApplicationBase: C:\\Programs\\vfp9\App Domain: wwDotNetBridge_56006605 which clearly shows the problem. You can see that both are loading from different appDomains but the each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads.The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it. 
Here's what the viewer looks like: The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:*** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll Running under executable c:\programs\vfp9\vfp9.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = Ras\ricks LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified) LOG: Appbase = file:///C:/Programs/vfp9/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = vfp9.exe Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config. LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind). LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL. LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll. LOG: Assembly is loaded in default load context. WRN: The same assembly was loaded into multiple contexts of an application domain: WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: This might lead to runtime failures. WRN: It is recommended to inspect your application on whether this is intentional or not. WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue. Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified. Remember your Assembly Locations As mentioned earlier all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or ASP.NET Runtime for example) it's common to create a private BIN folder and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM Interop has very specific requirements on where assemblies can be found and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed. 
The solution here was simple enough: Delete the extraneous assembly which was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out because it seems so unlikely that types wouldn't match. I know I've run into this a few times and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future. © Rick Strahl, West Wind Technologies, 2005-2012
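For reference, here is a minimal, self-contained sketch of the pattern the post describes: creating a MarshalByRefObject-derived type in a second AppDomain via CreateInstanceFrom and comparing where each side actually loaded the assembly from. The type and domain names are hypothetical, not the ones from wwReflection, and this assumes .NET Framework 4:

using System;
using System.Reflection;

// Hypothetical stand-in for the TypeParser type discussed above. It must derive
// from MarshalByRefObject so instances can be proxied across AppDomain boundaries.
public class RemoteParser : MarshalByRefObject
{
    public string GetLoadInfo()
    {
        return typeof(RemoteParser).Assembly.CodeBase + " | " +
               AppDomain.CurrentDomain.FriendlyName;
    }
}

public static class Program
{
    public static void Main()
    {
        string assemblyPath = Assembly.GetExecutingAssembly().Location;

        // The child domain inherits the parent's ApplicationBase unless overridden,
        // which is exactly where a duplicate copy of the assembly can sneak in.
        AppDomain remote = AppDomain.CreateDomain("RemoteDomain");
        object proxy = remote.CreateInstanceFrom(assemblyPath,
                           typeof(RemoteParser).FullName).Unwrap();

        // dynamic dispatch works even when the cast below would fail.
        dynamic dynProxy = proxy;
        Console.WriteLine("Local : " + typeof(RemoteParser).Assembly.CodeBase);
        Console.WriteLine("Remote: " + dynProxy.GetLoadInfo());

        // This cast only succeeds when both domains resolved the same physical assembly.
        RemoteParser parser = (RemoteParser)proxy;
        Console.WriteLine("Cast succeeded: " + (parser != null));

        AppDomain.Unload(remote);
    }
}

If the two CodeBase values printed above differ, you are in the situation the post describes and the cast will throw.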

    Read the article

  • Syntax Highlighting for Gherkin (Cucumber Language)

    - by Liam McLennan
    SyntaxHighlighter is the de facto standard for syntax highlighting on the web. I am currently working on a tool for publishing BDD specifications on the web and I want syntax highlighting. Unfortunately, SyntaxHighlighter does not support Gherkin, the language Cucumber and SpecFlow use to define BDD specifications. Writing new language parsers for SyntaxHighlighter is very easy, so I implemented one for Gherkin. Here is what a syntax-highlighted Gherkin file looks like: # A comment here Feature: Some terse yet descriptive text of what is desired In order to realize a named business value As an explicit system actor I want to gain some beneficial outcome which furthers the goal @secretlabel Scenario: Some determinable business situation Given some precondition And some other precondition When some action by the actor And some other action And yet another action Then some testable outcome is achieved And something else we can check happens too Like all SyntaxHighlighter brushes, to use this one you need to install the brush (shBrushGherkin.js). I have also used a custom theme to get it just the way I wanted it (shThemeGherkin.css). If you would like to use my Gherkin brush you may download the code and example page.

    Read the article

  • FileOpenPicker/FileSavePicker doesn't allow *.* wildcard file associations

    - by mbrit
    On Twitter, Matthias Jauernig commented that the FileOpenPicker and FileSavePicker don't allow *.* wildcard file associations. I was relaxed about this and wrote back that it was related to sandboxing, implying it was a "good thing"; however, as Matthias commented back, perhaps it's not. In Metro-style, sandboxing works such that if something gives you a file (e.g. the picker, or a share operation), you can access it regardless of where it lives on the system. If you find the file yourself, you have to declare the type. The reason I think it's related to sandboxing is that if you work with files programmatically you have to be explicit about the file types. This is to stop malware that you think is only interested in, say, .PDF files from scanning and uploading any .EML files it can find on the machine. It follows, then, that the restriction would continue in the pickers. It allows the retail store team to validate that an app is likely to behave itself. If it's an app that works with images, locking down the picker so that it can only access image file types makes sense. However, Matthias mentioned that he has an app that should allow files of any arbitrary type. That fits more into the "if the user selects it, it must be OK" camp than the "programmatic scanning" camp. So now I'm left wondering why the picker doesn't allow any type to be selected. I think, then, that maybe the decision comes down to simplicity. A lot of the decisions in Metro-style design relate to ideas about "zero intimidation". Allowing the user to select any file is too much like Old Windows, and not enough like Reimagined Windows. What happens in Matthias's app if the user selects Explorer.exe as the file he or she wants to work with? I guess it's fine if you expect your user to know what they're doing (Old Windows), but not so fine if you're expecting a three year old to work with it (Reimagined Windows).
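    For context, here is a rough C#/WinRT sketch of the explicit file-type declaration the pickers expect; the extensions shown and the surrounding method are illustrative, not taken from either app discussed above:

    using System.Threading.Tasks;
    using Windows.Storage;
    using Windows.Storage.Pickers;

    public static class PickerSample
    {
        // A minimal sketch: the picker wants an explicit list of extensions
        // rather than a catch-all *.* association.
        public static async Task<StorageFile> PickImageAsync()
        {
            var picker = new FileOpenPicker
            {
                ViewMode = PickerViewMode.Thumbnail,
                SuggestedStartLocation = PickerLocationId.PicturesLibrary
            };
            picker.FileTypeFilter.Add(".jpg");
            picker.FileTypeFilter.Add(".png");

            // Returns null if the user cancels the picker.
            return await picker.PickSingleFileAsync();
        }
    }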

    Read the article

  • Character progression through leveling, skills or items?

    - by Anton
    I'm working on a design for an RPG game, and I'm having some doubts about the skill and level system. I'm going for a more casual, explorative gaming experience and so thought about lowering the game complexity by simplifying character progression. But I'm having trouble deciding between the following: Progression through leveling, no complex skill progression, leveling increases base stats. Progression through skills, no leveling or base stat changes, skills progress through usage. Progression through items, more focus on stat-changing items, items confer skills, no leveling. However, I'm uncertain what the effects on gameplay might be in the end. So, my question is this: What would be the effects of choosing one of the above alternatives over the others? (Particularly with regards to the style and feel of the gameplay) My take on it is that the first sacrifices more frequent rewards and customization in favor of a simpler gameplay; the second sacrifices explicit customization and player control in favor of more frequent rewards and a somewhat simpler gameplay; while the third sacrifices inventory simplicity and a player metric in favor of player control, customization and progression simplicity. Addendum: I'm not really limiting myself to the above three, they are just the ones I liked most and am primarily interested in.

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply when using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window.
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,300), rep(2,300), rep(3,300)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • How should I configure TRIM Support for LVM logical volumes?

    - by Zack Perry
    I am setting up a notebook for software demo purposes. The machine has an Intel Core i7 CPU, 8GB RAM, a 128GB SSD, and runs Ubuntu 12.04 LTS 64bit desktop. As it is, the SSD is configured to have a single volume group, with /boot, /swap, and / all in their respective logical volumes. They collectively consume 30GB of space. I plan to use the remaining space for logical volumes for KVM guests, all running Ubuntu 12.04 Server. I would like to ensure that the SSD is utilized optimally. Although there is some great info on this site about setting up TRIM support for file system setups that do not involve LVM, I have not found an explicit guide for my planned setup. I did find this page which talks about adding issue_discards in /etc/lvm/lvm.conf. But in said file on my machine, I didn't find the cited content. I double-checked man lvm.conf(5) and didn't see any mention of this option either. Thus, I'm not sure what to do. Furthermore, even supposing adding the option is the right thing to do, should I still add mount options such as noatime etc. in my machine's /etc/fstab? Any tips, pointers, and/or further guidance are greatly appreciated.

    Read the article

  • Documentation often omits to specify which flavour of regular expression to use, so is there a default flavour that we should all be familiar with?

    - by JW01
    Often I come across documentation that says "use a regular expression here", and I have to spend quite some time digging around trying to work out which regular expression format they are expecting. As far as I can tell, there are many types of regular expression. But, at my last place of work I was made to feel stupid when I suggested adding some text to our User Documentation to specify the type of regular expression to be used. When someone says "a regular expression", what is the regular expression syntax most people expect and where is it documented? Update: I was prompted to single out some examples - but no disrespect to these great projects: the eregi docs page is not particularly helpful in explaining the expected syntax. Nor can I easily work out the syntax expected here if I just land on the page from a search. There is no explicit mention of the regular expression pattern expected by PatternExpectation() on the SimpleTest page. Etc., etc. Here is a highly voted SO answer that seems to assume there is only one flavour of regular expressions.

    Read the article

  • Why do programmers seem to be such bad spellers?

    - by Joel Etherton
    Programming languages are very precise tools based on explicit grammars. They're very picky, and when being used they require an exacting amount of detail. C#, for instance, is case-sensitive, so even getting the case of an argument wrong will cause an error. Questions asked all over StackExchange are replete with misspellings, grammatical errors, and other problems that seem to indicate a lack of attention to detail when it comes to the language itself. Now, I understand there are a lot of programmers out there whose native language is not English, and I am not directing this question (rant, one might say) at them. I'm referring to the individuals who are clearly from an English-speaking background who refuse to pay attention to these simple details. I am not perfect by any means, but I try to use the language correctly so that my meaning will be understood correctly. I find programmers misspelling variable names, classes, and all manner of words in any kind of technical documentation they might write. I have had to withstand code where I am repeatedly referring to the subit[sic] button or HttpWebResponse reponse. The general complaint about bad spelling is one thing, and it will always be there. I accept that. But my question/comment is about the proclivity for bad spelling within the programming community. I would think that people who deal with such exacting tools would be more naturally predisposed towards proper spelling. Yet this doesn't seem to be the case.

    Read the article

  • Writing cross-platform Types, Interfaces and Classes/Methods in C++

    - by user827992
    I'm looking for the best solution for writing cross-platform software, i.e. code that I write and that I have to interface with different libraries and platforms each time. What I consider the easiest part, correct me if I'm wrong, is the definition of new types: all I have to do is write an hpp file with a list of typedefs. I can keep the same names for each new type across the different platforms so my codebase can be shared without any problem. typedefs also help me to redefine a better scope for my types in my code. I will probably end up having something like this: include |-windows | |-types.hpp |-linux | |-types.hpp |-mac |-types.hpp For the interfaces I'm thinking about the same solution used for the types: a series of hpp files. Probably I will write all the interfaces only once, since they rely on the types and all "cross-platform portability" is ensured by the work done on the types. include | |-interfaces.hpp | |-windows | |-types.hpp |-linux | |-types.hpp |-mac | |-types.hpp For classes and methods I do not have a real answer. I would like to avoid 2 things: the explicit use of pointers and the use of templates. I want to avoid the use of pointers because they can make the code less readable for someone, and I want to avoid templates just because if I write them, I can't separate the interface from the definition. What is the best option to hide the use of the pointers? I would also like some words about macros and how to implement some OS-specific calls and definitions.

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot and file.close was really expensive. A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stats down from 1.5 seconds to 0.6. Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation. It was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit. I recorded a couple of traces of my application running, then I looked at them in the Windows Performance Analyzer, whereupon I saw that in each case, the CPU spike of my app was followed by a CPU spike in MsMpEng.exe. What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
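    The stat-caching idea itself is language-agnostic: once a path has been checked, reuse the answer instead of hitting the filesystem again, accepting that results may go stale within a single build run. Here is a rough sketch of that memoization pattern, written in C# with hypothetical names; it is not the author's Python cache, just an illustration of the same technique:

    using System.Collections.Generic;
    using System.IO;

    // Caches per-path filesystem metadata so repeated checks (the equivalent of
    // calling os.stat on the same file over and over) hit a dictionary instead
    // of the disk.
    public class StatCache
    {
        private readonly Dictionary<string, FileSystemInfo> _cache =
            new Dictionary<string, FileSystemInfo>();

        public FileSystemInfo Get(string path)
        {
            FileSystemInfo info;
            if (_cache.TryGetValue(path, out info))
                return info;

            info = Directory.Exists(path)
                ? (FileSystemInfo)new DirectoryInfo(path)
                : new FileInfo(path);
            _cache[path] = info;
            return info;
        }

        public bool IsDirectory(string path)
        {
            return Get(path) is DirectoryInfo;
        }

        // Call this whenever the build writes to a path, otherwise the cache
        // keeps returning stale answers for the rest of the run.
        public void Invalidate(string path)
        {
            _cache.Remove(path);
        }
    }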

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I ran into an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a “select into” query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) would fail with: Msg 2714, Level 16, State 6, Line 1 There is already an object named '#test' in the database. The only hypothesis we could come up with was that the “create table” part of the statement was committed independently from the actual “insert”. We can confirm that by capturing the “Transaction Log” event on Profiler (filtering by SPID0). The result is that when we run: SELECT * INTO #results FROM master.sys.objects the Profiler output shows two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around it if you feel the need (as we did in this case). You can either change it into independent “CREATE TABLE / INSERT SELECT” or you can enclose the SELECT INTO in an explicit transaction: SET XACT_ABORT ON BEGIN TRANSACTION SELECT * INTO #results FROM master.sys.objects COMMIT

    Read the article

  • Re-writing URL's with lighttpd

    - by Tim Post
    I'm using Lighttpd to serve a GET-based API that I'm working on, and I'm having some difficulty with re-writing requests. My API calls are very simple. An example would be: url:/method/submethod?var1=something&var2=something&key=something This is what I have: url.rewrite-once = ( "^/methodfoo(.*)" => "/index.php$1&method=methodfoo") This works fine as long as all methods are shallow, but I have methodfoo/submethod to deal with. What I'd like to do is use a rule that can split this up for me, appending a &submethod to the end of the rewritten string. For instance: url://methodfoo/submethod?foo=bar&foobar=foo Would be re-written to: url://index.php?foo=bar&foobar=foo&method=methodfoo&submethod=foo Can I do that without an explicit rule for each submethod? Additional Information: Yes, I know I can use a rule like: "^/methodfoo/(.*)/(.*)" => "/index.php$2&method=methodfoo&submethod=$1" However, that fuglifies (TM) my link structure, as it would have to match: url://methodfoo/submethod/?foo=bar&foobar=foo When I really want: url://methodfoo/submethod?foo=bar&foobar=foo Thanks in advance for any suggestions.

    Read the article

  • Controlling an object through another object?

    - by Stefano Borini
    Today I saw the following pattern: you have an object A and an object B. Object B accepts a pointer to A in its constructor. Once B is created, there's a method B.doCalc() that performs a calculation (internally using A's information). The result is obtained with the method B.getResult(). In order to perform another calculation, A is modified, and B.doCalc() is called again. What is your opinion on this choice? I would have designed it differently, but I want to hear your voice. Edit: note that my main objection is to modifying A to get a different result from B, without touching B. Although similar, I think the following discipline expresses a much better feeling of what's going on. Instead of a = new A a.whatever = 5 b = new B(a) b.doCalc() res = b.getResult() a.whatever = 6 b.doCalc() res = b.getResult() you get the a pointer object from b itself: a = new A a.whatever = 5 b = new B(a) b.doCalc() res = b.getResult() a = b.getAPointer() a.whatever = 6 b.doCalc() res = b.getResult() because it makes more explicit the fact that a is taken from b and then modified. I still don't like it, though...

    Read the article

  • Xfce gets really confused about session saving, etc

    - by Pointy
    I'm getting a new laptop running Ubuntu 11.04. I've got the xfce4 packages all installed, which is something I've had no problems with on any of my other machines. On this new laptop, however, though I can log in and use an xfce session without any problems, logging out of a session is problematic: I click the "Log out" widget from the panel and then "Log out" from its option dialog. Then the thing just sits there, not logging out. Subsequent attempts to open the "Log out" widget fail with an error about the session manager being busy. After maybe a minute or so, it logs out. Though I've got the "Save session" option checked in the log out dialog, xfce just makes a complete hash of the business. It does remember the applications that I had running, but it seems to forget about the window manager (!!) and the workspace configuration. I don't log in/out that often, and generally I don't care much about restarting applications, but the window manager being missing is of course pretty annoying. I like xfce because it's simple and unobtrusive and usually works pretty well. I've never experienced this, and I've got two other machines also running 11.04 with pretty much the same setup (straight Ubuntu install with xfce4 packages added). Is there some good way to diagnose stuff like that? Edit: well, I nuked my session cache, did an explicit save from the session widget, and now it works. Well, it doesn't save the workspace location for each client and instead opens them all up on the first workspace, but I think that may be because, in the session, xfwm4 is the last thing in the "Client" list, so before it's started all the other clients just pile up in the first (and only) workspace. I'm still curious about how exactly it gets so messed up. I certainly wasn't knowingly attempting anything fancy or unorthodox, though I may have done something fishy inadvertently.

    Read the article

  • Little mysterious RowMatch

    - by kishore.kondepudi(at)oracle.com
    Incidentally, this was the first piece of code I ever wrote in ADF. The requirement was that we have tax rates which are read from a table, and there can be different types of tax rates, called certificates or exceptions, based on the rate_type column in the tax rates table. The simplest design I chose was to create an EO on the tax rates table and create two VO's called CertificateVO and ExceptionVO based on the same EO. So far so good. I wrote all the business logic in the EO and completed the model project. The CertificateVO has the query select * from tax_rates TaxRateEO where rate_type='CERTIFICATE' and similarly the ExceptionVO is also built. The UI is pretty simple: it has two tabs called Certificates and Exceptions, and each table has a button to create a tax rate. The certificate tab is driven by CertificateVO and the exception tab is driven by ExceptionVO. The CertificateVO has the default value of rate_type set to 'CERTIFICATE' and ExceptionVO has the default value of rate_type set to 'EXCEPTION' to provide default values for new records. So far so good. But on running the UI I noticed a strange thing: when I create a new row in Certificate I see the same row in Exception too, and vice-versa; i.e. whatever row I create in one VO also appears in the second one, although it shouldn't. I couldn't understand the reason for this behavior even though an explicit where clause is present. Digging through the documentation I found that ADF doesn't apply the where clause to new rows; instead it applies something called a RowMatch to them. RowMatch, in simple terms, is a where condition applied to the VO rows at runtime. Since we had both VO's based on the same EO, we have the same entity cache. The filter for new rows to be shown in a VO at runtime is actually the RowMatch rather than the where clause defined in the VO. The default RowMatch is empty; as a result, any new row appears in both VO's since it comes from the same entity cache. The solution to this problem is to use polymorphic view objects, which can do the row filtering based on configuration, or to override the getRowMatch() method in the VOImpl and pass the custom where filter instead of the default RowMatch. Eg: @Override public RowMatch getRowMatch() { return new RowMatch("rate_type='CERTIFICATE'"); } and similarly for ExceptionVO. With a proper RowMatch in place, new rows will route themselves to the appropriate VO. PS: This behavior (the same row pushed to both VO's from the entity cache) is also called ViewLink Consistency. Try it out!

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >