Search Results

Search found 10978 results on 440 pages for 'collision testing'.


  • Is now the right time to move to .NET 4?

    - by bconlon
    The reason I pose this question is that I'm looking at WPF development, and so using the latest version seems sensible. However, this means rolling out the .NET 4 runtime to PCs on old versions of the framework. Windows XP is still the number one O/S (estimated 40%+ market share). To run .NET 4 on XP requires Service Pack 3, and although it is good practice to move to the latest service packs, large companies are often slow to keep up due to the extensive testing involved.

    In fact, .NET 4 is not installed as standard with any Windows O/S as yet - Windows 7 and 2008 Server R2 have 3.5 installed. This is not quite as big an issue as it was for .NET 3.5, because .NET 4 is significantly smaller: it doesn't include the older runtimes. .NET 3.5 SP1 included .NET 3 and .NET 2 and was 250MB, although this was reduced by doing a web install. The size is also reduced a bit if you target the .NET 4 Client Profile, which should be OK for many WPF applications, and I think this may be rolled out as part of Windows service packs soon.

    But still, if your application is only 4-5 MB and you need 40-50 MB of Framework, it is worth consideration before jumping in and using the new shiny features.

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following:

    - It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted into the database later.
    - It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be features currently available to the customers at different tiers of pricing. Instead of hardcoding these, we read them from the database.

    The described solution is flexible, but there are some reasons why I don't like it. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system. Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like the wrong approach. Is it? Any ideas how best to solve these problems? Is there a better approach to handling the content that I'm overlooking?

    Read the article

  • public_html permissions for local development

    - by maGz
    I know this question has popped up a couple of times, but I can't seem to find a definitive answer to my issue, so please bear with me. I have Ubuntu Server 12.04 set up in VirtualBox for PHP development and testing (Drupal plus other PHP sites using the Yii framework). My question is in 3 parts...

    1) If I create a public_html folder under /home/myuser, do I need to give ownership of that folder to the Apache www-data group? If so, are there any specific permissions I should be setting? 755? (Btw, I am following this guide to create the public_html directory and set up multiple virtual hosts per site I create and test.) I previously had all of my sites under /var/www, but ran into massive permission denied errors whenever I tried to sFTP to it, either through FileZilla or PhpStorm. This is what I had previously done:

        sudo chgrp www-data /var/www
        sudo chmod -R 775 /var/www
        sudo chmod -R g+s /var/www
        sudo usermod -G www-data [my_ftp_user]

    2) If I create my PHP project and files in Windows through PhpStorm, and then upload via sFTP, will permissions get affected?

    3) Once I am satisfied with my developed project, would it be advisable to move and test it under /var/www to see how it would fare in a production-ish environment?

    I would really appreciate the help and advice here. I'm learning more as I go along, but dealing with Linux files and permissions is a bit of a new ballgame for me! Thank you.

    Read the article

  • TraceTune supports uploading Zip files

    - by Bill Graziano
    I’ve been using the online version of ClearTrace more and more lately. When I get to a new client it’s just much easier to upload a trace file rather than install ClearTrace. That means I’ve finally been adding more features to it. The two latest features are around ease of use.

    You can now upload a ZIP file that contains a trace file. Trace files are already somewhat compressed. Putting one in a ZIP file further compresses it by a factor of 8X or 9X in my testing. That means you can start with a 100MB trace and end up with a 10MB-12MB ZIP file to upload. I’m consistently able to get over 150,000 events in a 100MB ZIP file. That gives me a pretty good look at a system.

    The second part of this is that files are now processed asynchronously. After you upload a file you’ll be taken to a processing page that updates every few seconds with the number of rows processed. It generally takes under a minute to process a 100MB trace file but I *hated* staring at a blank screen.

    Give TraceTune a try. It’s getting easier to use every day.
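    As a rough illustration of the ZIP step described above (this snippet is not part of TraceTune; the file names are made up and it assumes the .NET 4.5 System.IO.Compression classes are available), a trace file can be zipped before upload like this:

        using System.IO.Compression;

        class ZipTraceFile
        {
            static void Main()
            {
                // Wrap the trace in a ZIP archive before uploading;
                // trace files typically shrink by roughly 8x-9x.
                using (var zip = ZipFile.Open("MyTrace.zip", ZipArchiveMode.Create))
                {
                    zip.CreateEntryFromFile("MyTrace.trc", "MyTrace.trc");
                }
            }
        }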

    Read the article

  • What is the recommended way to empty an SSD?

    - by Lekensteyn
    I've just received my new SSD since the old one died. This Intel 320 SSD supports TRIM. For testing purposes, my dealer put malware, err, Windows on it. I want to get rid of it and install Kubuntu on it. It does not have to be a "secure wipe", I just need to empty the disk in the most healthy way. I believe that dd if=/dev/zero of=/dev/sda just fills the blocks with zeroes and thereby uses up another write (correct me if I'm wrong). I've seen the answer "How to enable TRIM", but it looks like it's suited for clearing empty blocks, not wiping the disk. hdparm seems to be the program to do it, but I'm not sure if it clears the disk OR cleans empty blocks. From its manual page:

        --trim-sector-ranges
            For Solid State Drives (SSDs). EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!!
            Tells the drive firmware to discard unneeded data sectors, destroying any data that
            may have been present within them. This makes those sectors available for immediate
            use by the firmware's garbage collection mechanism, to improve scheduling for
            wear-leveling of the flash media. This option expects one or more sector range pairs
            immediately after the option: an LBA starting address, a colon, and a sector count,
            with no intervening spaces. EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!!
            E.g. hdparm --trim-sector-ranges 1000:4 7894:16 /dev/sdz

    How can I make all blocks appear as empty using TRIM?

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites / tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game, and that I can import a single square sprite and shade it appropriately in Unity as opposed to importing squares of many different colors.

    My experience with adjusting the hue and saturation within Photoshop shows that white is not an easy color to change, as things that are white often stay white. My testing in Unity shows that I can change the "color" of a sprite to anything other than white and the sprite is seemingly shaded appropriately, despite what I would have thought given my Photoshop experience.

    Since white objects do seem to take on the appropriate color shading when changed within Unity, my gut tells me that this is the best base color to begin with, meaning that I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best color sprite to begin with, and why does something like this work in Unity as opposed to adjusting the hue and saturation within Photoshop?
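    For context, with Unity's default sprite material the final pixel colour is the texture colour multiplied by the SpriteRenderer's colour, which is why a pure white base sprite takes on the tint exactly while darker pixels can only get darker. A minimal sketch of that kind of tinting (the component name and the specific tint value are illustrative only, not from the original question):

        using UnityEngine;

        // Hypothetical component: tints a plain white square sprite at runtime.
        public class TileTint : MonoBehaviour
        {
            void Start()
            {
                // Final pixel colour = texture colour * this tint, so a white (1,1,1)
                // base sprite ends up exactly this colour; a non-white base would be
                // darkened rather than recoloured.
                GetComponent<SpriteRenderer>().color = new Color(0.2f, 0.6f, 1f);
            }
        }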

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:

    Why Oracle
    “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom.

    Implementation Process
    It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process—from site planning to phase-I go-live—was executed in just over ten weeks, well ahead of the four months allocated to complete the project. mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations are tailored for peak performance, all patches are applied, and software and communications are consistently tested using proven methodologies and best practices.

    Read the entire profile here.

    Read the article

  • Skin Object Tokens for DotNetNuke 5 - 8 Videos

    In this tutorial we demonstrate how to use Skin Object Tokens in DotNetNuke v5 and above. Skin Object Tokens are a new skinning method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object Token is a web user control; it covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. This new Object Token method has been introduced into DotNetNuke with the idea of making it simpler to add a skin object into a DotNetNuke skin. The videos contain:

    Video 1 - Introduction to HTML Object Token Skinning
    Video 2 - Basic Styling of a Skin and Creating Multiple Content Panes
    Video 3 - Styling, Control Panel, Login and Register Skin Object Tokens
    Video 4 - Packaging, Installing, Testing and Viewing the ASCX Version of the Skin
    Video 5 - Viewing the Attributes for Skin Object Tokens, Logo Token, Search Token
    Video 6 - Breadcrumb Token, Text Token and Localization, Links Token
    Video 7 - More Skin Tokens and Token Replacement
    Video 8 - Demonstration of the Object Tokens and Bug Fixing

    Read the article

  • Just installed 13.10 and everything is fine except I cannot connect to the internet. Any thoughts?

    - by razorccatu
    I just installed Ubuntu 13.10 on my HP G62 laptop and the install went smoothly. I did the install off a USB drive after trying Ubuntu. While I was testing it, I connected to my wireless without issue and surfed a little. After the install, no wireless. I can still connect to my wireless network (at least it tells me I'm connected at full strength) but no servers can be found. I attempted to ping Google to no avail, and I attempted to ping my router to no avail. I then tried to hard-wire the machine, and once again it told me that I was connected but I was not. When I ran dmesg, I got the following message: "Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolvable. Please install nss-myhostname!" Is the hostname the issue? If so, how do I resolve it without an internet connection? If it's not the issue, how do I move forward? Thanks for any help. EDIT: I forgot to attach the image of my ifconfig if that might help.

    Read the article

  • Rich snippet for Google Custom Search - Schema.org

    - by Joesoc
    I am trying to extract the book URL from a link using microdata. The format is specified in schema.org. Here is my HTML:

        <div class="col-sm-4 col-md-3" itemscope itemtype="http://schema.org/Book">
          <div class="thumbnail">
            <img src="{{ book.thumbnailurl }}" itemprop="thumbnailUrl" style="width: 100px;height: 200px;">
            <div class="caption">
              <h4><span itemprop="name">{{ book.name }}</span> - <span itemprop="author">{{ book.author }}</span></h4>
              <p><span itemprop="about"> {{ book.about }}</span></p>
              <p>
                <a href="{{ book.url }}" itemprop="url" onclick="trackOutboundLink(‘{{ book.name }}’);">
                  <button type="button" class="btn btn-default btn-md">
                    <span class="glyphicon glyphicon-book"></span>Read
                  </button>
                </a>
              </p>
            </div>
          </div>
        </div>

    When I use the Google snippet testing tool, the JSON API returns book as an HTML link. However, when I make the call in JavaScript, the value of url is text("Read"). What am I missing?

    Read the article

  • Ubuntu server is dropping SSH connections, then not allowing me to log back on

    - by wilhil
    I have an ESX box on which I have loaded two Ubuntu Server machines. During setup, I chose no additional packages to install as I just wanted a lightweight machine for testing. The first thing I did was change the root password via sudo passwd. After ESX got on my nerves through lag, I decided to install OpenSSH via apt-get install openssh-server. It did its business, and I then opened PuTTY and could connect to both machines fine. The first time it connected, it asked me to add the SSH key, as obviously it did not know it. Anyway, the second server is working flawlessly, but the first seems to be giving me trouble. I was in the middle of typing a sentence when it kicked me off for no reason, and when I tried to reconnect, PuTTY gave me a warning that the SSH key had changed and that this is potentially dangerous. I attempted to log in anyway and it did not work, just the standard access denied message. Using the second machine, I SSHed in to the first machine and it worked straight away. I then killed the SSH sessions (and possibly the SSH server), reconnected via PuTTY, and again received the security warning message, but this time it allowed me to log on fine. ... I thought "glitch" and nothing more of it, but it just happened again! I really do not understand this and was hoping someone here can help.

    Read the article

  • I installed Ubuntu, the installer told me to reboot afterwards. I did, and now Linux won't boot

    - by mandy
    I'm trying to dual boot between Mac OS X 10.6.8 and Ubuntu 11.10. I have a MacBook Pro 8,1. So I installed from a 10.04 disk because the install window makes more sense to me, and it doesn't give me errors or anything. Also, any versions of Ubuntu after that don't boot from disk for whatever reason. (I think it's having to do with the EFI boot thing. I have to get Ubuntu 11.10 to boot from a USB with folders bootefiboot.iso.) Then my plan after that was, after the Ubuntu 10.04 install took care of all the swap and stuff for me without being messy, to upgrade to 11.10.

    So here I have 10.04 booting successfully back and forth from Mac OS X, no problem. I put in my 11.10 USB and the installer gives me the option to "update 10.04 to 11.10" - bingo, jackpot, that's what I want. Everything proceeds as normal, as in EVERY OTHER install of Ubuntu I have ever done, then the installer finishes and says HEY! I'm finished! Continue testing or reboot now! So I reboot, and what do I get??? A black screen that says the file system isn't found, to enter a boot disk and press any key. WHAT THE HELL????? So I boot the 11.10 installer again from USB, and select "erase 11.10 and install 11.10"; the installer proceeds normally and asks me to reboot. I reboot and get the SAME THING.

    Please, someone, help me get this right here. This is my first time actually dual booting between Mac and Linux. Usually I just wipe off OS X completely and install Ubuntu, but I actually need to keep my Mac partition this time. I have successfully installed 11.10 on this machine before, but that was when I did a clean install. Help?

    Read the article

  • An entry-level programmer's best option [on hold]

    - by user134409
    I am facing a puzzle and I am not sure of the best way to make a decision. In my spare time, besides playing video games, I got around to developing some games - nothing fancy, just small projects to get a better grasp of programming. After I finished college and got my BA in Computer Science, I got a job as a web developer at a small firm. The next few months were very stressful, as I had no previous experience and tried my best to make up for it. But after 6 months my boss told me I was inefficient and not very independent and let me go. To my credit, the help from the senior was very limited; I did learn a lot, but I learned it by myself. For example, they told me to do a UI in BackboneJS and it took me a while, but I got it working (even if it was poorly designed). But I managed to do it all by myself, because my senior was very busy and he did not have time even for my questions. Now I have found a new job, again in web development, but I am very afraid of what is going to happen next. I am afraid because I don't want to take the job and then be fired again after a couple of months; I get the feeling that this will look very bad on my CV, and job hopping is like a red flag. They want to hire me, but I am aware that they are working with new technologies and maybe I will end up not coping with them. So the question is: would an entry-level programmer be better off with a starting job in QA and testing, working his way up from there? I did learn a lot from my first job, but it was a blow to my morale when they decided to fire me. I do have low self-esteem and I know my skills as a programmer are not that great. But I like programming and want to get better, and I want to have a long career in it, so that's basically my pickle. Thank you in advance for the answers.

    Read the article

  • Advice and resources on collaborative environments

    - by Tjaart
    I need some advice on collaborative software environments. More specifically, I am looking for books and reference materials that can aid me in understanding team and code structures and the interactions thereof. In other words, books, blogs or white papers explaining: different strategies for structuring teams that share common code between each other but have distinct individual functions. To summarise my question, I would like to know what would be a good source of knowledge if I were to set up teams in an organisation that shared code but where each unit still remained autonomous. I have done some research on this subject and explored code review tools, distributed VCS, continuous integration tools, and unit testing automation. The tough part about implementing these tools is determining where a good place would be to start, which tools are low-hanging fruit, and which tools or methods provide higher success rates. If someone asks me for a code quality reference, I point them to Code Complete. I am looking for an equivalent guide on software team structures and tools to make this equation work better. I realise that this question is quite vague, but it arose as "we need to share code between teams without breaking each other's stuff and causing management headaches and reams of red tape". The answer is definitely not simple and requires changes on many levels, hence the question. If the question is too vague, please vote to close or delete. I would accept any good starting point as an answer.

    Read the article

  • Java GUI on Hello World [closed]

    - by user58892
    I am designing, implementing, testing, and debugging a GUI-based version of a “Hello, World!” program in a JFrame that includes a JLabel that reads “Hello, World!”, and I am trying to use a layout manager and an Exit button to close the program. Here's what I have so far; I would really appreciate it if you could help with its syntax. I am 90% done, but I've tried hard and it couldn't run.

        import java.awt.*;          // Needed for flow layout manager
        import javax.swing.*;       // All swing components live in the javax.swing package
        import javax.swing.JButton; // to recognize buttons
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.JTextField;

        public class HelloWorld {

            public static void main(String[] args) {
                // creates the label. The JLabel constructor
                // takes an optional argument which sets the text of the label
                /* The text will be aligned with the center of the frame
                 * otherwise it will align on the left. */
                JLabel label = new JLabel("Hello World!");
                new FlowWindow();
                label.setHorizontalAlignment(SwingConstants.CENTER);

                JFrame frame = new JFrame("Hello");

                // create exit button
                JButton button1 = new JButton("Exit");
                // Add exit button to the content pane.
                add(button1);

                frame.add(label);
                frame.setSize(300, 300);
                frame.setVisible(true);
                frame.setLocationRelativeTo(null);
                frame.toFront();
            }

            public static void FlowWindow() {
                // Add a new FlowLayout());
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            }
        }

    Read the article

  • Useful WatiN Extension Methods

    - by Steve Wilkes
    I've been doing a fair amount of UI testing using WatiN recently – here are some extension methods I've found useful.

    This one checks if a WatiN TextField is actually a hidden field. WatiN makes no distinction between text and hidden inputs, so this can come in handy if you render an input sometimes as hidden and sometimes as a visible text field. Note that this doesn't check if an input is visible (I've got another extension method for that in a moment), it checks if it's hidden.

        public static bool IsHiddenField(this TextField textField)
        {
            if (textField == null || !textField.Exists)
            {
                return false;
            }

            var textFieldType = textField.GetAttributeValue("type");

            return (textFieldType != null) && textFieldType.ToLowerInvariant() == "hidden";
        }

    The next method quickly sets the value of a text field to a given string. By default WatiN types the text you give it into a text field one character at a time, which can be necessary if you have behaviour you want to test which is triggered by individual key presses, but which most of the time is just painfully slow; this method dumps the text in in one go. Note that if it's not a hidden field then it gives it focus first; this helps trigger validation once the value has been set and focus moves elsewhere.

        public static void SetText(this TextField textField, string value)
        {
            if ((textField == null) || !textField.Exists)
            {
                return;
            }

            if (!textField.IsHiddenField())
            {
                textField.Focus();
            }

            textField.Value = value;
        }

    Finally, here's a method which checks if an Element is currently visible. It does so by walking up the DOM and checking for a Style.Display of 'none' on any element between the one on which the method is invoked and any of its ancestors.

        public static bool IsElementVisible(this Element element)
        {
            if ((element == null) || !element.Exists)
            {
                return false;
            }

            while ((element != null) && element.Exists)
            {
                if (element.Style.Display.ToLowerInvariant().Contains("none"))
                {
                    return false;
                }

                element = element.Parent;
            }

            return true;
        }

    Hope they come in handy.
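    As a quick usage illustration (this sketch is not from the original post; the page URL and element ID are made up), the three extension methods can be combined in a test along these lines:

        using WatiN.Core;

        public static class WatiNExtensionsExample
        {
            public static void FillFirstName()
            {
                // Open a page and look up a text field by id (both values are hypothetical).
                using (var browser = new IE("http://example.com/register"))
                {
                    var firstName = browser.TextField(Find.ById("FirstName"));

                    // Only type into it if it is rendered as a visible text input.
                    if (firstName.IsElementVisible() && !firstName.IsHiddenField())
                    {
                        // Sets the whole value in one go instead of typing character by character.
                        firstName.SetText("Steve");
                    }
                }
            }
        }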

    Read the article

  • Authenticate with Django 1.5

    - by gorjuce
    I'm currently testing Django 1.5 and a custom User model, but I have some problems. I've created a User class in my account app, which looks like:

        class User(AbstractBaseUser):
            email = models.EmailField()
            activation_key = models.CharField(max_length=255)
            is_active = models.BooleanField(default=False)
            is_admin = models.BooleanField(default=False)

            USERNAME_FIELD = 'email'

    I can correctly register a user, who is stored in my account_user table. Now, how can I log in? I've tried with:

        def login(request):
            form = AuthenticationForm()
            if request.method == 'POST':
                form = AuthenticationForm(request.POST)
                email = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=email, password=password)
                if user is not None:
                    if user.is_active:
                        login(user)
                    else:
                        message = 'disabled account, check validation email'
                        return render(
                            request,
                            'account-login-failed.html',
                            {'message': message}
                        )
            return render(request, 'account-login.html', {'form': form})

    This is my forms.py, which contains my register form:

        class RegisterForm(forms.ModelForm):
            """ a form to create user """
            password = forms.CharField(
                label="Password",
                widget=forms.PasswordInput()
            )
            password_confirm = forms.CharField(
                label="Password Repeat",
                widget=forms.PasswordInput()
            )

            class Meta:
                model = User
                exclude = ('last_login', 'activation_key')

            def clean_password_confirm(self):
                password = self.cleaned_data.get("password")
                password_confirm = self.cleaned_data.get("password_confirm")
                if password and password_confirm and password != password_confirm:
                    raise forms.ValidationError("Password don't math")
                return password_confirm

            def clean_email(self):
                if User.objects.filter(email__iexact=self.cleaned_data.get("email")):
                    raise forms.ValidationError("email already exists")
                return self.cleaned_data['email']

            def save(self):
                user = super(RegisterForm, self).save(commit=False)
                user.password = self.cleaned_data['password']
                user.activation_key = generate_sha1(user.email)
                user.save()
                return user

    My question is: why does authenticate() give me None? I know I'm trying to authenticate() with an email as the username, but isn't that one of the reasons to use a custom User model?

    Read the article

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both of these systems' core functionality lies within a shared library. Most of the time, the updates occur in the shared library and we simply deploy the updated library to both of these systems. The systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate these two systems into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are:

    - Easier maintenance
    - Decreased testing/QA time

    Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the amount of hours this will save in the future and how this will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?

    Read the article

  • Finding my way back into an old project that was turned upside-down by the developer. Your workflow?

    - by Kreativrandale
    After some time away, I'm asked to work on a heavy web project I did (layout, HTML/CSS) about a year ago. There are some changes that have to be made, basically some CSS and JS stuff. By now the whole project has been turned upside down by the developer. It gives me a hard time connecting to his work, especially because my old files and file structure won't work anymore. That's why I need an up-to-date working environment, but I don't want to change the files on the server directly - I need to do some testing and improving while doing this. So, what is your workflow in such a case? I thought about copying the whole server, or parts of it, to my own home server. But even that will be a big task for me (I'm more the front-end guy). It would be great if there's a way to shrink it down (PHP, MySQL, ...), since I only need to change some CSS/HTML/JavaScript. Are there any tools available? I'd love to hear how you handle such situations. Thanks a lot!

    Read the article

  • High Availability

    - by mattjgilbert
    Udi Dahan presented at the UK Connected Systems User Group last night. He discussed High Availability and pointed out that people often think this is purely an infrastructure challenge. However, the implications of system crashes, errors and resulting data loss need to be considered and managed by software developers. In addition, a system should remain both highly reliable (backwardly compatible) and available during deployments and upgrades. The argument is that you cannot be considered highly available if your system is always down every time you upgrade.

    For our recent BizTalk 2009 upgrade we made use of our Business Continuity servers (note the name, rather than calling them Disaster Recovery servers) to ensure our clients could continue to operate while we upgraded the Production BizTalk servers. Then we failed back to the newly built 2009 environment and rebuilt the BC servers. Of course, in the event of an actual disaster there was a window where one or the other set was not available to take over – however, our Staging machines were already primed to switch to production settings, having been used for testing the upgrade in the first place. While not perfect (the failover between environments was not automatic and not without some minimal outage), planning the upgrade in this way meant BizTalk was online during the rebuild and upgrade project, we didn't have to rush things to get back online, and planning meant we were ready to be as available as we could be in the event of an actual disaster.

    Read the article

  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms (.XSN files). Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.

    - Regardless of how the install command is run – PowerShell or the GUI in Central Admin – SharePoint wraps the XSN inside a solution (WSP), then installs and deploys the solution.
    - The solution is named by concatenating “form-” with the first 16 characters of the file name (or fewer, if the file name is shorter than 16), with the required .wsp at the end. So if the form name was MyInfopathForm.xsn the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    - It only gets worse! Had there already been a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember, a digit is a finger - I suspect a middle finger - so when you deploy the same form repeatedly (many versions of it, or, as it was in my case, testing a script time and again), you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!

    Well, there are ways around it. When deploying by hand, remove the solution from the solution store before deploying the form again. In the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!
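    For reference, the naming rule described above can be sketched like this (purely illustrative and not part of the deployment script; the class and method names are made up):

        using System;
        using System.IO;

        static class FormSolutionNaming
        {
            // Applies the rule described above: "form-" + the first 16 characters
            // (or fewer) of the XSN file name + ".wsp".
            public static string GetExpectedSolutionName(string xsnFileName)
            {
                var baseName = Path.GetFileNameWithoutExtension(xsnFileName);
                var truncated = baseName.Length <= 16 ? baseName : baseName.Substring(0, 16);
                return "form-" + truncated + ".wsp";
            }

            static void Main()
            {
                // Prints "form-MyInfopathForm.wsp" and "form-WithdrawalOfRequ.wsp",
                // which is why two forms sharing their first 16 characters collide.
                Console.WriteLine(GetExpectedSolutionName("MyInfopathForm.xsn"));
                Console.WriteLine(GetExpectedSolutionName("WithdrawalOfRequestsForRefund.xsn"));
            }
        }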

    Read the article

  • Browser Statistics for Geekswithblogs.net

    - by Jeff Julian
    I love Google Analytics! It helps me so much during my day-to-day maintenance of Geekswithblogs.net and our other sites. I can see so much data about our visitors and come up with new ways of delivering more content to our readers so they can really get the most out of our community. Browsers and browser versions are a big indicator for me to help decide what we can support and what we need to be testing with. The clear browsers of choice right now are Chrome, IE, and Firefox, taking up 94.1%. The next browser is Safari at 2.71%. What this really brings to my attention, besides the fact that I had better test well with Chrome, Firefox, and IE, is that we are definitely missing an opportunity with mobile devices. We really need to turn up the heat when it comes to a mobile presence for Geekswithblogs.net as a community and the blogs that are on this site. We need easy discovery of new content and easy tracking of what I like. I am definitely on a mission to make this happen, and it will be a phased approach, but I want to see these numbers change, since most of us have 2 or 3 mobile devices we use for social activities, yet tools are lacking for interacting with technical data besides RSS readers.

    Technorati Tags: Mobile, Geekswithblogs.net, Browsers

    Read the article

  • After reboot allocated node gets commissioned again

    - by cloudfan
    I had set up a MAAS with Juju and deployed OpenStack into it for testing. During my vacation I shut down all computers. Afterwards I started first the MAAS server, then the node where Juju was bootstrapped and juju-gui was deployed to. Sadly the node got commissioned again, and so all my deployments are gone. I decommissioned the corresponding node from the MAAS and bootstrapped it again. Afterwards I tested again: Juju bootstrapping the node, shutting down both nodes and starting them in the same order again. The Juju node gets commissioned again. After bootstrapping, everything looked fine in the MAAS GUI (the node was set to allocated to root, which was also the case after the restart), the Juju GUI was available, and juju status worked fine. Before my vacation I also had some other nodes deployed through Juju. They all seem to be still available and have not been commissioned again. Do you have any ideas what might have happened? Is there any issue with a bootstrapped Juju node and the commissioning? Any help or hints on what I could check are appreciated! Thanks in advance for your help!

    Read the article

  • Cannot create a neutral unit with a trigger

    - by Xitcod13
    I've been playing around with the StarCraft UMS (Use Map Settings) mode for a while, and usually I figure things out pretty quickly when I'm stuck. Alas, not this time. I'm trying to place a neutral unit (player 12) using a trigger. It refuses to work. I'm using Scmdraft 2.0 as my editor (but I can't get it to work in other editors either). All neutral units placed before the game starts are visible and all other triggers work fine. Also, I created a text message and it does display it in-game, so the trigger fires. For testing I created a trigger that looks like this:

        Player: Neutral (I tried Neutral Players, Player 1 and All Players as well)
        Condition: Always
        Action: Create 1 Terran Medic at 'Location 022' for Neutral (also tried Neutral Players)

    When I start the game nothing happens. Here is what I tried: I tried placing a start location for the neutral player (player 12), and I tried changing the owner of player 12 under map properties from Unused (the default) to Neutral and to Computer. Although it seems like it should be a common enough problem, I don't see it in any FAQ and I can't find anything about it when I Google it. Thanks in advance.

    Read the article

  • MVVM Light V4 preview (BL0014) release notes

    - by Laurent Bugnion
    I just pushed to Codeplex an update to the MVVM Light source code. This is an early preview containing some of the features that I want to release later under version 4. If you find these features useful for your project, please download the source code and build the assemblies. I will greatly appreciate any issue reports.

    This version is labeled “V4.0.0.0/BL0014”. The “BL” string is an old habit from my days at Siemens Building Technologies, called a “base level”. Somehow I like this way of incrementing the “base level” independently of any other consideration (such as alpha, beta, CTP, RTM etc.) and continue to use it to tag my software versions. In Microsoft parlance, you could say that this is an early CTP of MVVM Light V4.

    Caveat
    The code is unit tested, but as we all know this does not mean that there are no bugs. This code has not yet been used in production. Again, your help in testing this is greatly appreciated, so please report all bugs to me!

    What’s new?
    The following features have been implemented:

    Misc
    Various “maintenance work”. All WPF assemblies (that is, .NET35 and .NET4) now allow partially trusted callers. It means that you can use them in an XBAP in partial trust mode.

    Testing
    Various test updates. Added Windows Phone 7 unit tests. Note: for Windows Phone 7, due to an issue in the unit test framework, not all tests can be executed. I had to isolate those tests for the moment. The error was reported to Microsoft.

    ViewModelBase
    The constructor is now public to allow serialization (especially useful on the phone to tombstone the state). ViewModelBase.MessengerInstance now returns Messenger.Default unless it is set explicitly. Previously, MessengerInstance was returning null, which was complicating the code. Two new ways to raise the PropertyChanged event have been added. See below for details.

    Messenger
    Updated the IMessenger interface with all public members from the Messenger class. Previously some members were missing. A new Unregister method is now available, allowing you to unregister a recipient for a given token.

    RelayCommand
    RaiseCanExecuteChanged now acts the same in Windows Presentation Foundation as in Silverlight. In previous versions, I was relying on the CommandManager to raise the CanExecuteChanged event in WPF. However, it was found to be too unreliable, and a more direct way of raising the event was found preferable. See below for details.

    Raising the PropertyChanged event
    A very much requested update is now included: the ability to raise the PropertyChanged event in a viewmodel without using “magic strings”. Personally, I don’t see strings as a major issue, thanks to two features of the MVVM Light Toolkit:

    - In the DEBUG configuration, every time that the RaisePropertyChanged method is called, the name of the property is checked against all existing properties of the viewmodel. Should the property name be misspelled (because of a typo or refactoring), an exception is thrown, notifying the developer that something is wrong. To avoid impacting performance, this check is only made in the DEBUG configuration, but that should be enough to warn developers in case they miss a rename.
    - The property name is defined as a public constant in the “mvvminpc” code snippet. This allows checking the property name from another class (for example if the PropertyChanged event is handled in the view). It also allows changing the property name in one place only.
    However, these two safeguards didn’t satisfy some of the users, who requested another way to raise the PropertyChanged event. In V4, you can now do the following:

    Using lambdas

        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty);
            }
        }

    This raises the property changed event using a lambda expression instead of the property name. Light reflection is used to get the name. This supports IntelliSense and can easily be refactored. You can also broadcast a PropertyChangedMessage using the Messenger.Default instance with:

        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty, oldValue, value, true);
            }
        }

    Using no arguments

    When the RaisePropertyChanged method is called within a setter, you can also omit the property name altogether. This will fail if executed outside of the setter, however. Also, to avoid confusion, there is no way to broadcast the PropertyChangedMessage using this syntax.

        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged();
            }
        }

    The old way

    Of course the “old” way is still supported, without broadcast:

        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
            }
        }

    And with broadcast:

        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName, oldValue, value, true);
            }
        }

    Performance considerations

    It is notorious that using reflection takes more time than using a string constant to get the property name. However, after measuring on all platforms, I found the differences to be very small. I will measure more and submit the results to the community for evaluation, because some of the results are actually surprising (for example, using the Messenger to broadcast a PropertyChangedMessage does not significantly increase the time taken to raise the PropertyChanged event and update the bindings). For now, I submit this code to you, and would be delighted to hear about your own results.

    Raising the CanExecuteChanged event manually

    In WPF, until now, the CanExecuteChanged event for a RelayCommand was raised automatically. Or rather, it was attempted to be raised, using a feature that is only available in WPF called the CommandManager. This class monitors the UI and, when something occurs, it queries the state of the CanExecute delegate for all the commands. However, this proved unreliable for the purpose of MVVM: since very often the value of the CanExecute delegate changes according to non-UI events (for example something changing in the viewmodel or in the model), raising the CanExecuteChanged event manually is necessary. In Silverlight, the CommandManager does not exist, so we had to raise the event manually from the start. This proved more reliable, and I have now changed the WPF implementation of the RaiseCanExecuteChanged method to be the exact same in WPF as in Silverlight.
    For instance, if a command must be enabled when a string property is set to a value other than null or an empty string, you can do:

        public MainViewModel()
        {
            MyTestCommand = new RelayCommand(
                () => DoSomething(),
                () => !string.IsNullOrEmpty(MyProperty));
        }

        public const string MyPropertyName = "MyProperty";
        private string _myProperty = string.Empty;
        public string MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
                MyTestCommand.RaiseCanExecuteChanged();
            }
        }

    Logo update

    I made a minor change to the logo: some people found the lack of the word “light” (as in MVVM Light Toolkit) confusing. I thought it was cool, because the feather suggests the idea of lightness; however, I can see the point. So I added the word “light” to the logo. Things should be quite clear now.

    What’s next?

    This is only the first of a series of releases that will bring MVVM Light to V4. In the next weeks, I will continue to add some very requested features and correct some issues in the code. I will probably continue this fashion of releasing the changes to the public as source code through Codeplex. I would be very interested to hear what you think of that, and to get feedback about the changes.

    Cheers,
    Laurent

    Read the article
