Search Results

Search found 4882 results on 196 pages for 'odd behaviour'.


  • virus behaviour but couldn't find any virus on windows7

    - by coder247
    Hi friends, I found strange virus-like behaviour on my PC running Windows 7. I'm not able to type properly with my keyboard: sometimes it won't respond, and at other times it adds characters indefinitely to the typing area. When I click on the desktop it creates new folders. When I press Ctrl+N in a browser window it opens an unlimited number of windows. I tried the Kaspersky and ESET antivirus trial versions but couldn't find any virus. I don't get this behaviour always, but about 50% of the time... Thanks.

    Read the article


  • Optimize code performance when odd/even threads are doing different things in CUDA

    - by Orion Nebula
    Hi all! I have two large vectors, I am trying to do some sort of element multiplication, where an even-numbered element in the first vector is multiplied by the next odd-numbered element in the second vector .... and where the odd-numbered element in the first vector is multiplied by the preceding even-numbered element in the second vector Ex. vector 1 is V1(1) V1(2) V1(3) V1(4) vector 2 is V2(1) V2(2) V2(3) V2(4) V1(1) * V2(2) V1(3) * V2(4) V1(2) * V2(1) V1(4) * V2(3) I have written a Cuda code to do this: (Pds has the elements of the first vector in shared memory, Nds the second Vector) //instead of using %2 .. i check for the first bit to decide if number is odd/even -- faster if ((tx & 0x0001) == 0x0000) Nds[tx+1] = Pds[tx] * Nds[tx+1]; else Nds[tx-1] = Pds[tx] * Nds[tx-1]; __syncthreads(); Is there anyway to further accelerate this code or avoid divergence ? Thanks
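
    One branch-free alternative, since both cases write to the neighbouring element: for an even tx the partner index is tx+1 and for an odd tx it is tx-1, which is exactly tx ^ 1. A minimal sketch, assuming (as in the question) that tx is the thread index and Pds/Nds are the shared-memory arrays:

        // Same arithmetic as the if/else above, but every thread in the warp
        // executes the identical instruction, so there is no divergence.
        int partner = tx ^ 1;                     // even -> tx + 1, odd -> tx - 1
        Nds[partner] = Pds[tx] * Nds[partner];    // each Nds element is touched by exactly one thread
        __syncthreads();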

    Read the article

  • Optimize code performance when odd/even threads are doing different things in CUDA

    - by Ashraf
    Hi all! I have two large vectors, I am trying to do some sort of element multiplication, where an even-numbered element in the first vector is multiplied by the next odd-numbered element in the second vector .... and where the odd-numbered element in the first vector is multiplied by the preceding even-numbered element in the second vector Ex. vector 1 is V1(1) V1(2) V1(3) V1(4) vector 2 is V2(1) V2(2) V2(3) V2(4) V1(1) * V2(2) V1(3) * V2(4) V1(2) * V2(1) V1(4) * V2(3) I have written a Cuda code to do this: (Pds has the elements of the first vector in shared memory, Nds the second Vector) //instead of using %2 .. i check for the first bit to decide if number is odd/even -- faster if ((tx & 0x0001) == 0x0000) Nds[tx+1] = Pds[tx] * Nds[tx+1]; else Nds[tx-1] = Pds[tx] * Nds[tx-1]; __syncthreads(); Is there anyway to further accelerate this code or avoid divergence .. Thanks

    Read the article

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again.) Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update - which seems necessary, because a leaf node (e.g. a behaviour like 'return to base') needs to be constantly checked to see if it is still running, failed or completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)? My thoughts so far: action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list). The return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node - so the tree has so far determined that we ought to still be trying this action) we know whether the action has completed, is still running, etc. If my actual action code is running from an action list on an entity, then I possibly need to cancel previously running actions in the list - I am thinking that I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but this seems like very close logic to behaviour trees, and I don't want to be duplicating behaviour. This leaves me with some questions: 1) How would I feed the action return state back into the tree? It's obvious I need to store some information relating to 'currently executing actions' on the entity, and check that in the tree tick, but I can't imagine how. 2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out the actual queued-up actions) sound like a reasonable approach? 3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process. (Edit) I am also thinking about storing a single instance of a given behaviour tree in memory, and providing it by reference to any entity that uses it. So any information about what action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!) I hope I have expressed my questions adequately! Thanks in advance for any help :)
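
    One possible shape for the feedback loop asked about in question 1), sketched in C++ (Status, Action, Entity and currentAction are illustrative names, not from any particular engine): the leaf never runs the action itself; it queues an action object on the entity and, on later tree ticks, simply reports the status that the per-frame action list has recorded on that object.

        #include <memory>
        #include <string>

        enum class Status { Running, Success, Failure };

        struct Action {                        // lives in the entity's per-frame action list
            std::string name;
            Status status;                     // updated every frame by the action list
        };

        struct Entity {
            std::shared_ptr<Action> currentAction;   // what the tree last queued
        };

        // Leaf node, ticked only when the tree itself is evaluated (every N frames).
        struct ActionLeaf {
            std::string actionName;

            Status tick(Entity& e) {
                // Nothing queued, or a different action is queued: start ours.
                if (!e.currentAction || e.currentAction->name != actionName) {
                    e.currentAction = std::make_shared<Action>(Action{actionName, Status::Running});
                    return Status::Running;
                }
                // Otherwise just report how the queued action is doing.
                return e.currentAction->status;
            }
        };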

    Read the article

  • Django: Odd mark_safe behaviour?

    - by Mark
    I wrote this little function for writing out HTML tags: def html_tag(tag, content=None, close=True, attrs={}): lst = ['<',tag] for key, val in attrs.iteritems(): lst.append(' %s="%s"' % (key, escape_html(val))) if close: if content is None: lst.append(' />') else: lst.extend(['>', content, '</', tag, '>']) else: lst.append('>') return mark_safe(''.join(lst)) Which worked great, but then I read this article on efficient string concatenation (I know it doesn't really matter for this, but I wanted consistency) and decided to update my script: def html_tag(tag, body=None, close=True, attrs={}): s = StringIO() s.write('<%s'%tag) for key, val in attrs.iteritems(): s.write(' %s="%s"' % (key, escape_html(val))) if close: if body is None: s.write(' />') else: s.write('>%s</%s>' % (body, tag)) else: s.write('>') return mark_safe(s.getvalue()) But now my HTML get escaped when I try to render it from my template. Everything else is exactly the same. It works properly if I replace the last line with return mark_safe(unicode(s.getvalue())). I checked the return type of s.getvalue(). It should be a str, just like the first function, so why is this failing?? Also fails with SafeString(s.getvalue()) but succeeds with SafeUnicode(s.getvalue()). I'd also like to point out that I used return mark_safe(s.getvalue()) in a different function with no odd behavior. The "call stack" looks like this: class Input(Widget): def render(self): return html_tag('input', attrs={'type':self.itype, 'id':self.id, 'name':self.name, 'value':self.value, 'class':self.itype}) class Field: def __unicode__(self): return mark_safe(self.widget.render()) And then {{myfield}} is in the template. So it does get mark_safed'd twice, which I thought might have been the problem, but I tried removing that too..... I really have no idea what's causing this, but it's not too hard to work around, so I guess I won't fret about it.

    Read the article

  • Odd behaviour with PHP's in_array function.

    - by animuson
    I have a function that checks multiple form items and returns either boolean(true) if the check passed or the name of the check that was run if it didn't pass. I built the function to run multiple checks at once, so it will return an array of these results (one result for each check that was run). When I run the function, I get this array result: Array ( [0] => 1 [1] => password [2] => birthday ) // print_r array(3) { [0]=> bool(true) [1]=> string(8) "password" [2]=> string(8) "birthday" } // var_dump The 'username' check passed and the 'password' and 'birthday' checks both failed. Then I am using simple in_array statements to determine which ones failed, like so: $results = $ani->e->vld->simulate("register.php", $checks); die(var_dump($results)); // Added after to see what array was being returned if (in_array("username", $results)) // do something if (in_array("password", $results)) // do something if (in_array("birthday", $results)) // do something The problem I'm having is that the 'username' line is still executing, even though 'username' is not in the array. It executes all three statements as if they were all true for some reason. Why is this? I thought maybe the bool(true) was automatically causing the function to return true for every result without checking the rest of the array, but I couldn't find any documentation that would suggest such odd functionality.
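
    The likely explanation is that in_array() uses loose (==) comparison by default, and in PHP true == "username" evaluates to true because any non-empty string is truthy, so the bool(true) element matches every check. Passing true as the third argument switches to strict (===) comparison:

        <?php
        $results = array(true, "password", "birthday");
        // Loose: the bool(true) element matches any non-empty string.
        var_dump(in_array("username", $results));        // bool(true)  -- surprising
        // Strict: types must match as well, so only genuine string hits count.
        var_dump(in_array("username", $results, true));  // bool(false)
        var_dump(in_array("password", $results, true));  // bool(true)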

    Read the article

  • PHP: Odd behaviour with date_sunset function

    - by Svish
    I'm having a look at the date_sunset function in PHP and have run into an issue that I find a bit strange. I have this piece of code: $sunset = date_sunset(mktime(0, 0, 0, 5, 14, 2010), $format, // Format 55.596041, // Latitude 12.992495, // Longitude 90, // Zenith 2 // GMT Offset ); For the three different formats, that would give me: SUNFUNCS_RET_STRING 21:05 SUNFUNCS_RET_DOUBLE 21.095732016315 SUNFUNCS_RET_TIMESTAMP 1273863944 // H:i:s O -> 19:05:44 +0000 Why is the timestamp format ignoring the GMT offset? Is it supposed to be like that? If so, what is the reason behind that?
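
    A Unix timestamp is an absolute point in time (seconds since the epoch, effectively UTC), so there is nothing for the GMT offset to shift in the SUNFUNCS_RET_TIMESTAMP case; the offset only affects how the STRING and DOUBLE formats are rendered. The local 21:05 comes back once the timestamp is formatted for the right zone. A small sketch using the values from the question (Europe/Stockholm is my assumption for those coordinates):

        <?php
        $ts = date_sunset(mktime(0, 0, 0, 5, 14, 2010), SUNFUNCS_RET_TIMESTAMP,
                          55.596041, 12.992495, 90, 2);
        // 1273863944 is 19:05:44 UTC; apply the offset only when formatting.
        echo gmdate('H:i', $ts + 2 * 3600), "\n";   // 21:05
        // Or let PHP's timezone database do the work:
        date_default_timezone_set('Europe/Stockholm');
        echo date('H:i', $ts), "\n";                // 21:05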

    Read the article

  • program that writes the even and odd numbers

    - by user292489
    I was writing a program that reads a set of numbers from a file called dog.txt and writes them to two other files, separating odd and even numbers. I was able to compile my program; however, the output is not what I expected, which was the even numbers in one file (EVEN) and the odd numbers in another file (odd). #include <stdio.h> #include <stdlib.h> int main(int argc, char *argv[]) { int i; int even,odd; int num; if (argc != 4) { printf("Usage: executable in_file output_file\n"); exit(0); } FILE *dog = fopen(argv[1], "r"); FILE *feven= fopen(argv[2], "w"); FILE *fodd= fopen (argv[3], "w"); while (fscanf(dog, "%d", &num) != EOF) { if (0==i%2){ i++; printf("even= %d\n", num); } else if(i!=0){ i++; printf("odd= %d\n", num); } } fclose(feven); fclose(fodd); fclose(dog); return 0; } output: even= 1 odd= 2 even= 34 odd= 44 even= 66 odd= 78 even= 94 odd= 21 even= 23 odd= 54 even= 44 odd= 65 even= 78 odd= 68 even= 92
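
    The loop above classifies numbers by an uninitialised loop counter i rather than by the value just read (num % 2), and it prints to stdout instead of writing to the two files it opened. A corrected sketch of what the program appears to intend, keeping the same command line (input file, even file, odd file):

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            int num;

            if (argc != 4) {
                printf("Usage: %s in_file even_file odd_file\n", argv[0]);
                exit(1);
            }

            FILE *dog   = fopen(argv[1], "r");
            FILE *feven = fopen(argv[2], "w");
            FILE *fodd  = fopen(argv[3], "w");
            if (dog == NULL || feven == NULL || fodd == NULL) {
                perror("fopen");
                exit(1);
            }

            /* Test the value that was read, not a loop counter. */
            while (fscanf(dog, "%d", &num) == 1) {
                if (num % 2 == 0)
                    fprintf(feven, "even= %d\n", num);
                else
                    fprintf(fodd, "odd= %d\n", num);
            }

            fclose(feven);
            fclose(fodd);
            fclose(dog);
            return 0;
        }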

    Read the article

  • Blend Interaction Behaviour gives "points to immutable instance" error

    - by kennethkryger
    I have a UserControl that is a base class for other user controls, that are shown in "modal view". I want to have all user controls fading in, when shown and fading out when closed. I also want a behavior, so that the user can move the controls around.My contructor looks like this: var tg = new TransformGroup(); tg.Children.Add(new ScaleTransform()); RenderTransform = tg; var behaviors = Interaction.GetBehaviors(this); behaviors.Add(new TranslateZoomRotateBehavior()); Loaded += ModalDialogBase_Loaded; And the ModalDialogBase_Loaded method looks like this: private void ModalDialogBase_Loaded(object sender, RoutedEventArgs e) { var fadeInStoryboard = (Storyboard)TryFindResource("modalDialogFadeIn"); fadeInStoryboard.Begin(this); } When I press a Close-button on the control this method is called: protected virtual void Close() { var fadeOutStoryboard = (Storyboard)TryFindResource("modalDialogFadeOut"); fadeOutStoryboard = fadeOutStoryboard.Clone(); fadeOutStoryboard.Completed += delegate { RaiseEvent(new RoutedEventArgs(ClosedEvent)); }; fadeOutStoryboard.Begin(this); } The storyboard for fading out look like this: <Storyboard x:Key="modalDialogFadeOut"> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleX)" Storyboard.TargetName="{x:Null}"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <BackEase EasingMode="EaseIn" Amplitude="0.3" /> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.4" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <BackEase EasingMode="EaseIn" Amplitude="0.3" /> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)" Storyboard.TargetName="{x:Null}"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <BackEase EasingMode="EaseIn" Amplitude="0.3" /> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.4" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <BackEase EasingMode="EaseIn" Amplitude="0.3" /> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="{x:Null}"> <EasingDoubleKeyFrame KeyTime="0" Value="1" /> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0" /> <EasingDoubleKeyFrame KeyTime="0:0:0.4" Value="0" /> </DoubleAnimationUsingKeyFrames> </Storyboard> If the user control is show, and the user does NOT move it around on the screen, everything works fine. However, if the user DOES move the control around, I get the following error when the modalDialogFadeOut storyboard is started: 'Children' property value in the path '(0).(1)[0].(2)' points to immutable instance of 'System.Windows.Media.TransformCollection'. How can fix this?
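
    One workaround worth trying, on the assumption that the TranslateZoomRotateBehavior replaces or freezes the control's RenderTransform while the user drags (which would leave the storyboard's (TransformGroup.Children)[0] path pointing at an immutable collection): rebuild a fresh TransformGroup just before starting the fade-out, mirroring what the constructor already does. A sketch only, not a verified fix:

        protected virtual void Close()
        {
            // Re-create the transform the fade-out storyboard expects; the drag
            // behaviour may have swapped in (or frozen) its own transform.
            var tg = new TransformGroup();
            tg.Children.Add(new ScaleTransform());
            RenderTransform = tg;

            var fadeOutStoryboard = ((Storyboard)TryFindResource("modalDialogFadeOut")).Clone();
            fadeOutStoryboard.Completed += delegate { RaiseEvent(new RoutedEventArgs(ClosedEvent)); };
            fadeOutStoryboard.Begin(this);
        }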

    Read the article

  • Chrome and Safari strange behaviour in Javascript

    - by mck89
    Hi, I've written this piece of code: var a=function(){ }; a.name="test"; a.prop="test2"; Now if I debug the code with the console: console.log(a.name); console.log(a.prop); In Firefox I get a.name="test" and a.prop="test2", while in Safari and Chrome I get a.prop="test2" but a.name="". It seems that there's no way to assign a "name" property to a function in WebKit browsers. Do you know why? But the most important thing is, do you know a workaround for that?
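
    The name property of function objects appears to be predefined and read-only in WebKit, so the assignment is silently ignored there rather than throwing. The usual workaround is simply to store the value under a key that functions don't already define:

        var a = function () {};
        a.name = "test";          // silently dropped in Chrome/Safari: "name" is built in and not writable
        a.displayName = "test";   // any non-built-in key behaves consistently across browsers
        a.prop = "test2";
        console.log(a.name, a.displayName, a.prop);   // "", "test", "test2" in WebKit browsers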

    Read the article

  • Behaviour Driven Maturity Model

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/07/02/153326.aspx
    For anyone who is interested, I have written a small paper about the theory behind the BizTalk Maturity Assessment, using a generic framework I have called the "Behaviour Driven Maturity Model", and how it could be applied to the assessment of other subjects. The paper is at the following link: http://btsmaturity.blob.core.windows.net/behaviour-driven-model/Behaviour%20Based%20Maturity%20Model%20-%20Introduction.pdf
    If you would like to create a model for a different subject area based on the details of this paper, then I would encourage this as much as possible; all I ask is the following:
    1. Let us know you're doing it so we can help tell people about each other's activities
    2. Make it free to the community
    3. Reference back to BizTalkMaturity.com as the source of your model

    Read the article

  • Deep insight into the behaviour of the SPARC T4 processor

    - by nospam(at)example.com (Joerg Moellenkamp)
    Ruud van der Pas and Jared Smolens wrote a really interesting whitepaper about the SPARC T4 and its behaviour with regard to certain code: How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study. In this article the authors compare and explain the behaviour of the UltraSPARC T4 and T2+ processors in order to highlight some of the strengths of the SPARC T-series processors in general and the T4 in particular.

    Read the article

  • What does this code means? Odd & Even

    - by Rodolfo Contreras
    <?php $Odd = "even"; $query = $MySQLi->query("SELECT id, look, username, motto FROM users WHERE rank = '7'"); if($query->num_rows > 0): while($UserRow = $query->fetch_assoc()) { $Odd = ($Odd == "even") ? "odd" : "even"; ?> I'm using a CMS for a website, but I can't quite understand this code. I roughly know what it does, but I'm not sure how to work with it, mostly this line: $Odd = ($Odd == "even") ? "odd" : "even"; Can you guys help me?
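
    The ternary simply toggles $Odd between "odd" and "even" on every pass through the while loop, which CMS templates typically use to zebra-stripe table rows. A stripped-down illustration of the same pattern (the sample names are made up):

        <?php
        $Odd = "even";
        foreach (array("alice", "bob", "carol") as $username) {
            // Flip the value each iteration: odd, even, odd, ...
            $Odd = ($Odd == "even") ? "odd" : "even";
            echo '<tr class="' . $Odd . '"><td>' . $username . "</td></tr>\n";
        }
        // Rows get class "odd", "even", "odd", so CSS can alternate the colours.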

    Read the article

  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help, along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in the hope that somebody has seen this before and has ideas on what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue. A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically the MVC 4 Razor designer keeps popping up errors about the call signatures to various Html Helper methods being incorrect. It also complains that the ViewBag object doesn't support dynamic, and requests that assemblies be loaded into the project. Here's what the designer errors look like: You can see the red error underlines under the ViewBag and an Html Helper I plopped in at the top to demonstrate the behavior. Basically any HtmlHelper I'm accessing is showing the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors. What's odd about this is that *I think* this started only a few days ago, and nothing consequential that I can think of has happened to the project or overall installations. These errors aren't critical since the code runs, but they're pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors. What I've checked: Looking at the errors it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but certainly the ViewBag dynamic error looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file: /Views/web.config For good measure I added <system.web.webPages.razor> <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, <split for layout> Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <pages pageBaseType="System.Web.Mvc.WebViewPage"> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> <add namespace="System.Linq" /> <add namespace="System.Linq.Expressions" /> <add namespace="ClassifiedsBusiness" /> <add namespace="ClassifiedsWeb"/> <add namespace="Westwind.Utilities" /> <add namespace="Westwind.Web" /> <add namespace="Westwind.Web.Mvc" /> </namespaces> </pages> </system.web.webPages.razor> For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, rather than at runtime when the final compiled code runs just fine? © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC

    Read the article

  • Changing mouse behaviour in EOG

    - by Wauzl
    When I open a photo in eog I can easily zoom with the mouse mouse wheel. This is kind of nice but since I have a touchpad with two-fingered scrolling and horizontal scrolling I'd rather scroll in the image and zoom by using Ctrl+Mouse Wheel? I basically want the same behaviour as in evince: Horizontal Scroll and Normal Scroll navigate within the image and Ctrl + Mouse Wheel zooms in and out. Is this possible somehow? [EDIT] I just figured out that what I want is already implemented: I can pan the image by using Ctrl + Mouse Wheel. This is fine as it is the behaviour I want only with the Ctrl inverted. But I cannot pan to the left. Up, down and right work fine. What's the problem?

    Read the article

  • Ouya build experiencing odd graphical artifacts, screen half black

    - by Neeko
    I'm witnessing very odd graphical artifacts when I run my Unity game on Ouya. After the Unity splash screen, the game loads with the screen half black. This seemingly has started occurring out of the blue. It also doesn't occur in the editor or the standalone build, only on Ouya. I can't think of a single reason why this would be happening. If I open the Ouya menu screen and close it, the game returns to normal; somewhat, as there may be some artifacts lingering but the screen isn't half black like in the screen shot above. I know there's not much to go off of, but any insight into why this may be happening is greatly appreciated.

    Read the article

  • How do I customize Alt+Tab behaviour?

    - by David Oneill
    GOAL: I would like to make the little menu that pops up when I press Alt+Tab to gain the ability to jump to a particular window by pointing my mouse at one of the icons. This is the behaviour that my Mac OSX laptop has. The Situation I use Xfce, with the default windows manager (XFwm). I currently have Ubuntu 10.04 and 11.10, although I plan on updating both to XUbuntu 12.04 once the dust settles (probably mid-may?). (Clarification, for my current setup, I installed Ubuntu, then added xfce manually after the fact). So, a series of related questions. How do I set this up? (If this is not possible, please answer the following questions to help me learn) Is the Alt+Tab a function of the desktop environment, or the windows manager, or some other piece? (IE if I switch to using some other tool, could I potentially get this) Are there any customizations that are available either from settings or add-on applications that would allow me to change the behaviour of that menu?

    Read the article

  • Stacked Launcher Item Double Click Behaviour on Alt-tab

    - by Brandon Bertelsen
    Let's say that you have multiple firefox windows open. What you see happen is an additional arrow points to it's icon in the launcher. However, if you double click the icon, it displays all of the windows in a spread out fashion, similar to the behaviour from pushing Super + W, but only for that program group. Is it possible to make this window spreading behaviour occur with alt-tab? PS: No idea what tags I should use, or if the language (jargon) in the title or question is appropriate.

    Read the article

  • Merge two PDF files containing even and odd pages of a book

    - by Yurij73
    I have two searchable PDF documents, say even.pdf and odd.pdf, which contain the even and odd pages of a book, respectively. I can decompile each PDF into separate files 001.pdf, 002.pdf, 003.pdf, et cetera. The question is how to merge them. Both the even and odd sequences are numbered 1, 2, 3. If the numbering in the decompile process with pdftk were different, e.g. 1, 3, 5 for the odd pages and 2, 4, 6 for the even pages instead of 1, 2, 3, 4, I could simply merge them. Can I do this any other way?
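
    Assuming a reasonably recent pdftk (one that has the shuffle operation), the interleaving can be done in a single step with no need to split or renumber the pages first: shuffle takes one page at a time from each input handle in turn. Since page 1 of the book is odd, the odd handle is listed first:

        pdftk A=odd.pdf B=even.pdf shuffle A B output book.pdf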

    Read the article

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing about property overwrite behaviour because i found it confusing at first in the hope of preventing some learning pain for the uninitiated with MSBuild :-)The confusion for me came because of the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What i mean is that the following two statements are always identical in behaviour, regardless if the property has been supplied on the command line -  <PropertyGroup>    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>  </PropertyGroup>has the same behaviour regardless of command line override as -  <PropertyGroup>     <PropA>PropA set at project level</PropA>   </PropertyGroup>  i.e. the two above property declarations have the same result whether the property is overridden on the command line or not.To prove this experiment with the following .proj file -<?xml version="1.0" encoding="utf-8"?><Project ToolsVersion="4.0" >  <PropertyGroup>    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>  </PropertyGroup>  <Target Name="Target1">    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target2">    <PropertyGroup>      <PropA>PropA set in Target2</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target3">    <PropertyGroup>      <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target4">    <PropertyGroup>      <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target></Project>Try invoking it using both of the following invocations and observe its output -1)>msbuild blog.proj /t:Target1;Target2;Target3;Target42)>msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"Then try those two invocations with the following three variations of specifying PropA at the project level -1)  <PropertyGroup>     <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>   </PropertyGroup> 2)   <PropertyGroup>     <PropA>PropA set at project level</PropA>   </PropertyGroup>3)  <PropertyGroup>     <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>   </PropertyGroup>

    Read the article

  • Odd Profiler Results with EF4

    - by AjarnMark
    I have been doing some testing of using the Microsoft Entity Framework 4 with stored procedures and ran across some really odd results in SQL Server Profiler. The application that is running which uses Entity Framework 4 is a simple Web Application written in C#, and the Entity Data Model is actually contained in a referenced class library of its own.  I’ll write more about my experiences with this later.  For now the question is, why does SQL Profiler think that the stored procedure is running in Master, and not in my application database? While analyzing the effects of using custom helper methods on my EDM classes to call the stored procedure, I decided to run Profiler while I stepped through the code so that I had a clear understanding of exactly when and what calls were made to the SQL Server.  I ran Profiler switching back and forth between the TSQL and TSQL_SP templates.  However, to reduce the amount of results rows I needed to wade through, I set a filter on DatabaseID to be equal to my application’s database.  Each time I ran this, the only thing that I saw was an Audit:Login to the database, but no procedure or T-SQL statements executed, yet I was definitely getting results back to my web page.  I tried other Profiler templates, still filtering on DatabaseID (tangent: I found, at least back in SQL 2000 Profiler, that filtering on DatabaseID was more reliable than filtering on DatabaseName.  Even though I’m now running SQL 2008, that habit sticks with me).  Still no results other than the Login.  Very weird! Finally, I decided to run Profiler with no filtering and discovered that that lines which represent my stored procedure and its T-SQL commands are all marked with DatabaseID = 1, which is Master.  Why in the world would that be?  My procedure is definitely in the application database, and not in Master, and there is nothing funny about the call to the procedure evident in Profiler (i.e. it is not called as MyAppDB.dbo.MyProcName, but rather just dbo.MyProcName).  There must be something funny with the way the Entity Framework is wrapping this call, and I don’t like it…I don’t like it one bit.  My primary PROD server contains 40+ databases on it, and when I need to profile something, I expect to be able to filter based on DatabaseID (for the record, I displayed DatabaseName in my results, too, and it also shows Master). I find the same pattern of everything except the Login showing up as being in Master when I run my version that uses standard LINQ to Entities instead of stored procedures, so that suggests it is not my code, but rather something funny with SQL Server 2008 Profiler or the Entity Framework. If you have any ideas about why this might be so, please comment below.

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are. 1 – Branding in a post-digital world, Graham Hales This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like on overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequentially, they experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage can negatively-impact buying decisions increasingly poorly. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well. 
The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling? 2 – Challenging our core beliefs about human behaviour, Mark Earls This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational, they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London Riots and the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which, in turn made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word: People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects. Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so  intervening with information may not be appropriate. Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think. Things tend to converge – Behaviour trends to a consensus normal. When faced with choices people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd. Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.  Where we need to be: “A bigger boy made me do it” Where we are: “a wizard did it and ran away” However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”. 
UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense” and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look. 4 – Changing behaviour through communication, Stephen Donajgrodzki This was a fantastic follow up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized  an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a to simpler. Coordinated, easily-observed environmental pressures create preconditions for change and build motivation. (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability) A triggering even leads to a change attempt. (getting a cold and panicking about how bad the cough is) Interventions can be made to enable an attempt (NHS services, public information, nicotine patches) If it succeeds – yay. If it fails, there’s strong negative enforcement. Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”. The idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms. 
Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education. 5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian) The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize  grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim): Set goals Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals. Think like a start-up This is the “lean” stuff. Try things, fail quickly, respond. Don’t restrict platforms Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter. Track your stats Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs? Build brand advocates Do things to involve people and make them awesome, and they’ll cheer-lead for you. The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building. Thoughts I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentations suggests it may be) then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car's computer switched between gears intermittently. Imagine building up speed, then when you reach 80km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain "These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program, which probably accounts for how you managed to drive here in the first place..."  I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said "Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources". Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily and unfortunately often the most overlooked goal of writing unit tests by those new to the art and those who oppose it alike: the goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method will in practice soon prove to be an exhausting and ultimately fruitless exercise, given the ever-changing nature of business requirements. Consequently, it is quite possible, whilst conducting proper unit tests, to call any single method several times as you examine and contemplate different scenarios. Let's write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly you can use whatever you're comfortable with. First we'll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox.  Then we'll create a Gearbox class which we'll pass to the constructor when we instantiate an object of type Computer.
    All three are described below.   ISpeedSensor.cs namespace AutomaticVehicle {     public interface ISpeedSensor     {         int ReportCurrentSpeed();     } }   Gearbox.cs namespace AutomaticVehicle {      public class Gearbox     {         private ISpeedSensor _speedSensor;           public Gearbox( ISpeedSensor gearboxSpeedSensor )         {             _speedSensor = gearboxSpeedSensor;         }         /// <summary>         /// This method obtains its reading from the speed sensor.         /// </summary>         /// <returns></returns>         public int ReportCurrentSpeed()         {             return _speedSensor.ReportCurrentSpeed();         }     } } Computer.cs namespace AutomaticVehicle {     public class Computer     {         private Gearbox _gearbox;         public Computer( Gearbox gearbox )         {             _gearbox = gearbox;         }          public int GetCurrentSpeed()         {             return _gearbox.ReportCurrentSpeed( );         }     } } Since this post is about unit testing, that is exactly what we'll create next. Create a second project in your solution. I called mine AutomaticVehicleTests and I immediately referenced the respective NUnit, Moq and AutomaticVehicle DLLs. We're going to write a test to examine what happens inside the Computer class. ComputerTests.cs namespace AutomaticVehicleTests {     [TestFixture]     public class ComputerTests     {         [Test]         public void Computer_Gearbox_SpeedSensor_DoesThrow()         {             // Mock ISpeedSensor in gearbox             Mock< ISpeedSensor > speedSensor = new Mock< ISpeedSensor >( );             speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();             Gearbox gearbox = new Gearbox( speedSensor.Object );               // Create Computer instance to test its behaviour towards an exception in gearbox             Computer carComputer = new Computer( gearbox );             // For simplicity let’s assume for now the car only travels at 60 km/h.             Assert.AreEqual( 60, carComputer.GetCurrentSpeed( ) );          }     } }   What is happening in this test? We have created a mocked object using the ISpeedSensor interface, which we've passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I'll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time. Next I've gone ahead and created a scenario where I've declared the speed sensor in Gearbox to be faulty by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether it's caused by mechanical failure, some logical error on your part, or a fellow developer who didn't consult the documentation (or the lack thereof) - whether you're calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It's a scenario I've created, and this test is about how the code within the Computer instance will behave towards any such error as I've depicted. Now, if you've followed closely, in my final assert method you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I'm testing to see if the value returned is equal to what I expect it to be under perfect conditions – I'm not testing to see if an error has been thrown!
Why is that? Well, in short this is TDD. Test Driven Development is about first writing your test to define the result we want, then to go back and change the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop. Remember? ) So let’s go ahead and run our test as is. It’s fails miserably... Good! Let’s go back to our Computer class and make a small change to the GetCurrentSpeed method.   Computer.cs public int GetCurrentSpeed() {   try   {     return _gearbox.ReportCurrentSpeed( );   }   catch   {     RunRestrictiveProgram( );   } }     This is a simple solution, I know, but it does provide a way to allow for different behaviour. You’re more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to. It's not within the scope of this post or related to the point I'm trying to make. What is important is to notice how the focus has shifted in our approach from how things can break - to how things behave when broken.   Happy coding!

    Read the article
