Search Results


  • How should I land my next consulting gig? [closed]

    - by MrOodles
    For the last couple of years, I've been working on speculative projects, expanding my skillset with side work, and paying the bills while having a blast consulting for startups. However, for a number of personal reasons, I need to spend the next 6-9 months maximizing my cash income. I want to put this into effect starting in early August. So that means I have one month to put the necessary client list/portfolio/resume together to start making this happen.

    As a programmer, I am very proficient in building Django web apps. I can write the necessary SQL, Python, JavaScript, and CSS to build every part of a Django app, and then do the system administration necessary to deploy on AWS using EC2. I can also rig up a CDN to work seamlessly with the app using S3 and CloudFront. I have built GIS applications using GeoDjango and PostGIS, and I have constructed social video apps by implementing Encoding.com as a service to prepare raw video files for consumption on the web. I am also moderately proficient in programming PHP, Java, and C#. I have built web apps in PHP, and desktop apps in Java and C#. I have dabbled with Android applications and iPhone apps, but nothing I would show off.

    I have experience doing SEO, social media marketing, and content marketing. Many of my clients have needed their apps promoted after they were built, and I was always happy to oblige when I could. I have also worked with biometrics technology, including fingerprints, for government contractors. This was as much a business analyst role as it was a programming gig, as I had to help answer RFPs, make checklists, and work around reams and reams of regulations to build applications that met very large bureaucratic requirements.

    I only have two real requirements for my next gig(s):
    1) Work remotely. I live in Northeast Ohio, and I don't plan on leaving, but I wouldn't mind traveling one or two weeks out of every month to service clients who need on-site help.
    2) A contracting rate of $60.00/hr-$∞ USD.

    So what should I do for the next 30 days to achieve this? Should I target some large company and learn the requisite buzzwords to impress them? Should I learn some new language or technology? Polish some skill that I already have? Should I build something using my current skillset, or with some new technology? Should I put a website for my consultancy together to market myself? Should I do that using latest technology x, y, and z? Or should I just slap something up on Tumblr? I'm willing to do anything (moral) over the next 4 weeks to put myself into a position to maximize my income, and I'm open to any and every idea Programmers users may have. Let me hear them.

    Read the article

  • Survey: Your Plans for Adopting New Firefox Releases?

    - by Steven Chan (Oracle Development)
    Mozilla is committing to releasing new Firefox versions every six weeks. Mozilla released Firefox 5 this week. With this release, Mozilla states that Firefox 4 is End-of-Life and will not receive any additional security updates.

    In a comment thread posted to Mike Kaply's blog article discussing these new Firefox policies, Asa Dotzler from Mozilla stated: "... Enterprise has never been (and I'll argue, shouldn't be) a focus of ours. Until we run out of people who don't have sysadmins and enterprise deployment teams looking out for them, I can't imagine why we'd focus at all on the kinds of environments you care so much about." In a later comment, he added: "... A minute spent making a corporate user happy can better be spent making many regular users happy. I'd much rather Mozilla spending its limited resources looking out for the billions of users that don't have enterprise support systems already taking care of them."

    Asa then confirmed that every new Firefox release will put the previous one into End-of-Life: "As for John's concern, 'By the time I validate Firefox 5, what guarantee would I have that Firefox 5 won't go EOL when Firefox 6 is released?' He has the opposite of guarantees that won't happen. He has my promise that it will happen. Firefox 6 will be the EOL of Firefox 5. And Firefox 7 will be the EOL for Firefox 6." He added: "'You're basically saying you don't care about corporations.' Yes, I'm basically saying that I don't care about making Firefox enterprise friendly."

    Kev Needham, Channel Manager at Mozilla, later stated to PC Mag: "The Web and Web browsers continue to evolve rapidly. Mozilla's focus is on providing users with the best Web experience possible, and Firefox needs to evolve at the pace the Web's users and developers expect. By releasing small, focused updates more often, we are able to deliver improved security and stability even as we introduce new features, which is better for our users, and for the Web. We recognize that this shift may not be compatible with a large organization's IT policy and understand that it is challenging to organizations that have effort-intensive certification policies. However, our development process is geared toward delivering products that support the Web as it is today, while innovating and building future Web capabilities. Tying Firefox product development to an organizational process we do not control would make it difficult for us to continue to innovate for our users and the betterment of the Web."

    Your feedback is needed for E-Business Suite certifications: Mozilla's new support policy has significant implications for enterprise users of Firefox with Oracle E-Business Suite. We are reviewing the implications for our certification and support policies for Firefox now. It would be very helpful if you could let me know about your organisation's plans for Firefox in light of this new information. Please feel free to drop me a private email, or post a comment here if that's appropriate.

    Read the article

  • Office 2010 Professional Plus (Top 10 reasons to upgrade)

    - by mbcrump
    Being a huge nerd, I decided that I would go ahead and upgrade to the latest and greatest Office: Office 2010 Professional Plus. The biggest concern I had was losing all my mail settings from Outlook 2007. Thankfully, it upgraded gracefully and worked like a charm. So let's start this top 10 list.

    1) You can upgrade without fear of losing all your stuff! As you can tell by the screenshot below, you can select what you want to do. I selected to remove all previous versions.
    2) Outlook conversations: just like Gmail, you can now group emails by conversations. This is simply awesome and a must-have.
    3) The ability to ignore conversations. If you are on an email thread that has nothing to do with you, simply "ignore" the conversation and all its emails go into the deleted folder.
    4) Quick Steps: do you send an email to the same team member or group constantly? With Quick Steps, it's just one click away.
    5) Spell check in the subject line!
    6) Easier screenshots, built in: just click the button. No more ALT-PrintScreen, for those that are not aware of the awesome SnagIt 10 that's out.
    7) Open in Protected View. When you open a document from an email attachment, it lets you know the file may be unsafe. You can click a button to enable editing. This is great for preventing macros.
    8) Excel has always had a variety of charts and graphs available to visually depict data and trends. With Excel 2010, though, Microsoft has added a new feature called Sparklines, which allows you to place a mini-graph or trend line in a single cell. Sparklines are a cool way to quickly and simply add a visual element without having to go through the effort of inserting a graph or chart that overwhelms the worksheet.
    9) Contact actions. If you hover over a name in one of the address fields of an email, you get a popup giving you several actions you can perform on that person, such as adding them to your Outlook contacts, scheduling a meeting, viewing their stored contact information if they are already in your contacts, sending an instant message or even starting a telephone call.
    10) Windows 7 taskbar context menu – I love the jump list. I don't know how much I would actually use it, but it just rocks.

    Read the article

  • TransportWithMessageCredential & Service Bus – Introduction

    - by Michael Stephenson
    Recently we have been working on a project using the Windows Azure Service Bus to expose line-of-business applications. One of the topics we discussed a lot was the security aspects of the solution. Most of the samples you see for Windows Azure Service Bus use the shared secret with the Access Control Service to protect the service bus endpoint, but one of the problems we found was that with this scenario any claims resulting from credentials supplied by the client are not passed through to the service listening on the service bus endpoint. As an example of this, we originally hoped that we could give two different clients their own shared secret key, and the issuer for each would indicate which client it was. If the claims had flowed through to the listening service then we could check that the message sent by client one was a type they are allowed to send. Unfortunately this claim isn't passed to the listening service, so we were unable to implement this scenario.

    We had also seen samples that talk about how changing the relayClientAuthenticationType attribute would allow you to authenticate the client within the service itself rather than with ACS. While this was interesting, it wasn't exactly what we wanted. By removing the step where access to the relay endpoint is protected by authentication against ACS, it means that anyone could send messages via the service bus to the on-premise listening service, which would then authenticate clients. In our scenario we certainly didn't want to allow clients to skip the ACS authentication step, because this could open up two attack opportunities. The first would allow an attacker to send messages through to our on-premise servers and potentially cause a denial-of-service situation. The second is the same kind of attack: by running lots of messages through the service bus which were then rejected, the attacker would cause us to incur per-message charges on our Windows Azure account.

    The correct way to implement our desired scenario is to combine one of the common options for authenticating against ACS, so the service bus endpoint cannot be accessed by an unauthenticated caller, with the normal WCF security features using the TransportWithMessageCredential security option. Looking around I could not find any guidance on how to implement this correctly, so on the back of setting this up I decided to write a couple of articles to walk through a couple of the common scenarios you may be interested in. These are available on the following links:
    Walkthrough – Combining shared secret and username token
    Walkthrough – Combining shared secret and certificates

    Read the article

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    Ok, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, an HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18-pin. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a DVI-D connection (yes, I have searched it). So I have the monitor plugged into the laptop.

    If I use any of the Nvidia proprietary drivers and try to set the resolution up to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding Lubuntus, and on all the Linux Mints since Lisa. Exact same issue. I have even tried it in Windows and it works perfectly there (although I did get the error once, but was unable to reproduce it).

    Using the open source drivers it works perfectly iff I switch off the laptop monitor (this makes no difference with the Nvidia drivers). I would have happily gone on using the open source drivers, except that since upgrading to Lubuntu 12.10, the open source drivers make my monitor completely hazy and show the same overscan issue until I (through the haze, only because I know where things are) go to the monitor settings, activate the laptop's monitor, then deactivate it, and suddenly it comes right. I have to do this every time. So I have to find a way to fix one of them, so I may as well tackle the proprietary drivers, hence this overlong question.

    Amidst other things, I have tried nvidia-settings, but because the monitor is connected via the 18-pin port, it detects the monitor as a VGA monitor and does not give me overscan correction options. I have tried custom modelines (although there are always more of those to try), I have tried using xrandr, and I have tried all the FlatPanel options. What I have not tried is a Gentoo build, as I don't have time any more to do that installation, but up to about three years ago when I ran Gentoo exclusively I did not have this issue.

    Below is a link to an image with a red highlight around the portion of the screen visible to me; the numbers around it are the number of pixels which are cut off. This does seem to drift a few pixels every now and then. Thanks in advance. Nvidia driver issue image

    Read the article

  • Problems uploading package to launchpad

    - by user74513
    I'm having a lot of problems uploading my Showdown project to a PPA. I have correctly set up my PGP keys and my public SSH key on Launchpad. I've packaged my C++ project with debuild, producing a source package. lintian gave me only these two warnings, which I think are OK for the Showdown rules:

    W: massren source: native-package-with-dash-version
    W: massren source: binary-nmu-debian-revision-in-source 1.0-0extras12.04.1~ppa2

    Producing a binary package works too, and the package installs without problems on my Ubuntu 12.04 machine; I only have a few more lintian warnings about the fact that I'm installing in /opt/extras.ubuntu.com/. I'm uploading with:

    dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes

    When I upload with dput I have no errors, the signatures seem OK, and the public key seems accepted too (since the upload goes on without asking for passwords):

    dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes
    Checking signature on .changes
    gpg: Signature made Mon 02 Jul 2012 10:00:38 AM CEST using RSA key ID 49982576
    gpg: Good signature from "Gabriele Greco "
    Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2_source.changes.
    Checking signature on .dsc
    gpg: Signature made Mon 02 Jul 2012 10:00:33 AM CEST using RSA key ID 49982576
    gpg: Good signature from "Gabriele Greco "
    Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2.dsc.
    Uploading to ppa (via ftp to ppa.launchpad.net):
    Uploading massren_1.0-0extras12.04.1~ppa2.dsc: done.
    Uploading massren_1.0-0extras12.04.1~ppa2.tar.gz: done.
    Uploading massren_1.0-0extras12.04.1~ppa2_source.changes: done.
    Successfully uploaded packages.

    At the moment I'm not receiving responses from the Launchpad site, but the upload does not show up on the PPA page. Previous attempts gave me response e-mails with different kinds of errors:

    File massren_1.0-0extras12.04.1~ppa1.tar.gz mentioned in the changes has a checksum mismatch. 1503fa155226cbc4aba2f8ba9aa11a75 != 294a5e0caf3fe95b0b007a10766e9672

    Or, more cryptic:

    GPG verification of /srv/launchpad.net/ppa-queue/incoming/upload-ftp-20120629-163320-001135/~gabrielegreco/massren/ubuntu/massren_1.0-0extras12.04.1~ppa1.dsc failed: Verification failed 3 times: ["(7, 58, u'No data')", "(7, 58, u'No data')", "(7, 58, u'No data')"] Further error processing not possible because of a critical previous error.

    Any idea how I can solve this problem? I'm new to Ubuntu packaging, so I may be missing some step... Is there an alternative to dput (i.e. a manual upload)?
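
    For reference, a typical cycle for re-uploading with a bumped version suffix looks roughly like this; it is only a sketch (dch, debuild and dput as above; the exact ~ppaN suffix is whatever dch -i produces in debian/changelog):

        dch -i "Rebuild for PPA re-upload"   # bump the version so Launchpad sees a new source package
        debuild -S -sa                       # rebuild the source package, forcing the tarball to be included
        dput ppa:gabrielegreco/massren ../massren_*_source.changes   # assumes only one matching .changes file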

    Read the article

  • Lenovo Thinkpad X1 Carbon support

    - by Robottinosino
    I am considering selling my Mac to get money towards a Lenovo ThinkPad X1, because what I really want is to be running an Ubuntu system all the time. Is this machine completely supported in Ubuntu, with no tiny little feature missing just because I am "going Linux"?

    Optional user story section, skip to the question below if you don't have time: I have a friend who bought a "works on Ubuntu" system a year ago and has hated the fact ever since: the battery lasts less than if he boots into Windows (which he despises), and he ascribes that to "no good OS/hardware integration and support for advanced chipset power management features"; odd behaviour on suspend/resume/hibernate (he says: "when it works 90% of the time and the other 10% it makes you lose your work, it is as good as broken - 90% is the same as 0%"); some occasional graphics card glitches he can perfectly well live with and has almost grown affectionate to; and finally, and this is what would make him undo his choice if he could, bad "input device drivers". He says the trackpoint and trackpad just "feel different", "so much better" on Windows, and that was impossible to know from the website brochure. That story makes me very doubtful... but I want to abandon this "walled garden" of a prison that is my Mac and go Ubuntu all the way, no doubt about that! My dilemma at this time is just: "I don't want to live with those eternal frustrations for sure"!

    Here's a directly answerable phrasing of my question: Is the Lenovo ThinkPad X1 supported on Ubuntu?
    - Yes/no, and for which version?
    - Which hardware features are not supported? Provide a list.
    - Optionally: sort the list in descending order of frustration from your experience.
    - Optionally: mention if there are acceptable workarounds to the "out-of-the-box" condition described in the earlier points and whether this ameliorates frustration at least to "tolerable" levels.

    Comment: the Ubuntu hardware certification page is so not-for-end-users it's unreal. Whoa. What would make it end-user friendly is:
    - A link to "buy here and you'll be just fine, this is the right configuration for you, it'll work as long as you press BUY on that page and don't browse further".
    - Removing mentions of "may" and "might not work". Just tell it straight: press buy here and you will get a working system with the exception of A, B, C (so that I can decide whether the philosophical "freedom pleasure" I get from escaping an Apple world is enough to off-balance the loss, for instance, of Bluetooth capabilities - something that I of course use on my Mac but "could" lose in order to use free (as in freedom) software).

    The certification page fails to dispel doubts in me as an end-user. I don't feel "eased into Ubuntu", I feel "partially informed".

    Read the article

  • How to deal with Warning: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back."

    - by VishnuTiwariBlog
    Hi, if you are integrating with SQL Server and dealing with batch messages, you may encounter this problem, and it is almost inevitable. The reason is contention for resources. If your batch contains four messages and all four messages have to be written to SQL Server at the same time, then four processes will contend for the SQL Server table and resources, and the obvious result will be that a few of your transactions are left uncommitted. If you are not handling dehydration [i.e. not modifying the default dehydration properties], then your orchestration will dehydrate and go for a retry. If the retry is set for every five minutes, then after five minutes the port will send the message to the database.

    The reason for writing this post was that I did not want to see so many DEHYDRATED messages, and this was happening because host throttling was not set. As soon as the BizTalk process finds that SQL resources are unavailable, it dehydrates that process and the process goes for a retry. The contention for resources is unavoidable, though we can fine-tune the dehydration settings. If you increase the time that an orchestration can be blocked at a subscription before being dehydrated, you possibly give the BizTalk engine more time to handle SQL resource availability. At least I solved the problem by fine-tuning the dehydration properties. Below is the section of config you need to add to BTSNTsvc.exe.config:

    <?xml version="1.0" ?>
    <configuration>
      <configSections>
        <section name="xlangs" type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
      </configSections>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="BizTalk Assemblies;Developer Tools;Tracking" />
        </assemblyBinding>
      </runtime>
      <xlangs>
        <Configuration>
          <Dehydration MaxThreshold="1800" MinThreshold="1" ConstantThreshold="-1">
            <VirtualMemoryThrottlingCriteria OptimalUsage="900" MaximalUsage="1300" IsActive="true" />
            <PrivateMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="true" />
            <PhysicalMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="false" />
          </Dehydration>
        </Configuration>
      </xlangs>
    </configuration>

    Read the article

  • More on Map Testing

    - by Michael Stephenson
    I have been chatting with Maurice den Heijer recently about his CodePlex project, the BizTalk Map Testing Framework (http://mtf.codeplex.com/). Some of you may remember the article I did for BizTalk 2009 and 2006 about how to test maps, but with Maurice's project he is effectively looking at how to improve productivity and quality by building some useful testing features within the framework to simplify the process of testing maps.

    As part of our discussion we realized that we both had slightly different approaches to how we validate the output from the map. Put simply, Maurice does some XPath validation of the data in various nodes, whereas my approach for most standard cases is to use serialization to allow you to validate the output using normal MSTest assertions. I'm not really going to go into the pros and cons of each approach because I think there is a place for both, and I'm sure others have various approaches which work too. What would be great is for the map testing framework to provide support for different ways of testing which can cover everything from simple cases to some very specialized scenarios. So, as agreed with Maurice, I have done the sample which I will talk about in the rest of this article to show how we can use the serialization approach to create and compare the input and output from a map in normal development testing.

    Prerequisites: One of the common patterns I usually implement when developing BizTalk solutions is to use xsd.exe to create .NET classes for most of the schemas used within the solution. In the testing pattern I will take advantage of these .NET classes.

    The Map: In this sample the map we will use is very simple and just concatenates some data from the input message to the output message. Hopefully the picture below illustrates this well.

    The Test: In the test I'm basically taking the following actions:
    1. Use the .NET class generated from the schema to create an input message for the map.
    2. Serialize the input object to a file.
    3. Run the map from .NET using the standard BizTalk test method which was generated for running the map.
    4. Deserialize the output file from the map execution to a .NET class representing the output schema.
    5. Use MSTest assertions to validate things about the output message.

    The picture below shows this. As you can see, the code for this is pretty simple and it's all strongly typed, which means changes to my schema which can affect the tests are easily picked up as compilation errors. I can then choose to have one test which validates most of the output from the map, or to have many specific tests covering individual scenarios within the map.

    Summary: Hopefully this post illustrates a powerful yet simple way of effectively testing many BizTalk mapping scenarios. I will probably have more conversations with Maurice about these approaches and perhaps some of the above will be included in the mapping test framework.

    The sample can be downloaded from here: http://cid-983a58358c675769.office.live.com/self.aspx/Blog%20Samples/More%20Map%20Testing/MapTestSample.zip

    Read the article

  • wlan0 (WPA2) doesn't work when configured manually

    - by 71GA
    I have been trying to reconfigure my eth0 and wlan0 interfaces by editing the /etc/network/interfaces file as follows:

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 192.168.1.11
        gateway 192.168.1.1
        netmask 255.255.255.0
        network 192.168.1.0
        dns-nameservers 193.2.1.66

    auto wlan0
    iface wlan0 inet static
        address 192.168.1.10
        gateway 192.168.1.1
        netmask 255.255.255.0
        network 192.168.1.0
        dns-nameservers 193.2.1.66
        wpa-driver wext
        wpa-ssid lausi
        wpa-ap-scan 2
        wpa-proto RSN
        wpa-pairwise CCMP
        wpa-group CCMP
        wpa-key-mgmt WPA-PSK
        wpa-psk 8952a447c860d13847ba1cabd15314ba9caf2fb207f19598f90c43fcd43c0d97

    But my wireless doesn't work when I use the command /etc/init.d/networking restart, and when I do this I get an error:

    * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
    * Reconfiguring network interfaces...
    RTNETLINK answers: File exists
    Failed to bring up eth0.
    ioctl[SIOCSIWENCODEEXT]: Invalid argument
    ioctl[SIOCSIWENCODEEXT]: Invalid argument
    RTNETLINK answers: File exists
    Failed to bring up wlan0.

    Although it clearly states that my eth0 interface couldn't be brought to life, it is working! But I can't say this for the wlan0 interface, which doesn't work even if I unplug the internet cable and again use the command /etc/init.d/networking restart. This seems weird to me... When I use the ifconfig -a command I get output which confirms that wlan0 isn't working and eth0 is:

    ziga@ziga-cq56:/etc/network$ ifconfig -a
    eth0   Link encap:Ethernet  HWaddr 60:eb:69:6f:5f:69
           inet addr:192.168.1.11  Bcast:192.168.1.13  Mask:255.255.255.0
           inet6 addr: fe80::62eb:69ff:fe6f:5f69/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:6764 errors:0 dropped:0 overruns:0 frame:0
           TX packets:6641 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:5932190 (5.9 MB)  TX bytes:1331846 (1.3 MB)
           Interrupt:42 Base address:0xc000

    lo     Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           inet6 addr: ::1/128 Scope:Host
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:1759 errors:0 dropped:0 overruns:0 frame:0
           TX packets:1759 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:107772 (107.7 KB)  TX bytes:107772 (107.7 KB)

    wlan0  Link encap:Ethernet  HWaddr 70:f3:95:e7:57:cc
           inet addr:192.168.1.10  Bcast:192.168.1.12  Mask:255.255.255.0
           BROADCAST MULTICAST  MTU:1500  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    How can I make my wlan0 interface work? It had been working previously with Network Manager and wicd...
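
    In case it helps with diagnosis, the same WPA settings can be driven by hand with wpa_supplicant, taking ifupdown out of the picture; the following is only a rough sketch (the interface name, SSID and PSK are copied from the file above, everything else is an assumption about the environment):

        # /tmp/wpa-test.conf -- the same settings as the wpa-* lines above
        network={
            ssid="lausi"
            proto=RSN
            key_mgmt=WPA-PSK
            pairwise=CCMP
            group=CCMP
            psk=8952a447c860d13847ba1cabd15314ba9caf2fb207f19598f90c43fcd43c0d97
        }

        # then, from a root shell:
        ifdown wlan0 2>/dev/null                                   # make sure ifupdown lets go of the interface
        ip link set wlan0 up
        wpa_supplicant -D wext -i wlan0 -c /tmp/wpa-test.conf -d   # watch for association / 4-way handshake errors
        # in a second terminal, once associated:
        ip addr add 192.168.1.10/24 dev wlan0
        ip route add default via 192.168.1.1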

    Read the article

  • Think Before You Leap - Life is Dangerous for Change Agents

    - by technodrone
    So you want to introduce agile methods to your team... The following are some "lessons learned" from someone who advocated agile/Scrum to a group that was not ready for it.

    "Change agents, in my experience, face negative consequences. Sometimes, most of the time at the beginning, it's painful. This is the question you might have to ask yourself: do you want to be a developer on a Scrum project, or do you want to be a Scrum master managing the process? I think with proper mentoring/training, you can become a good Scrum master. But is that what you want? If yes, you can go ahead and take the training. If you want to be a developer, you may not need to be certified as a Scrum master. You can just pick it up from a book such as Mike Cohn's new book Succeeding with Agile; I am reading it now. It's good.

    In my experience, I wasted my resources by trying to change the culture. It cost me a lot. Instead, I should have focused on the technical practices that are core to agile, and then looked for teams that are good at agile. I would have saved a lot of energy and time. Try baby steps first yourself in the company, and next with the team, starting with technical practices like writing unit tests, SOLID principles, patterns, refactoring, continuous integration, pairing, and peer code reviews. These have an inherent pull that can bring collaboration from a team. Once you see team adoption of the core practices, then you can introduce Scrum concepts like user stories, task boards, etc. This idea of leading by example seems to be working for most of the agile folks. You can pitch the core practices to the manager and the team, and start showing them how you are doing. You can put together a road map for agile adoption and pitch it to your manager. I would include the need for Scrum master training as part of the road map."

    I thought about his advice for a couple of weeks and read about the pitfalls of technical debt and of the team not having prior awareness of agile methods. The more I read and think about it, the more I think he was right. What do you think?

    Read the article

  • Binding MediaElement to a ViewModel in a Windows 8 Store App

    - by jdanforth
    If you want to play a video from your video library in a MediaElement control of a Metro Windows Store App, and you tried to bind the URL of the video file as a source to the MediaElement control like this, you may have noticed that it doesn't work:

    <MediaElement Source="{Binding Url}" />

    I have no idea why it's not working, but I managed to get it going using a ContentControl instead:

    <ContentControl Content="{Binding Video}" />

    The code-behind for this is:

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        _viewModel = new VideoViewModel("video.mp4");
        DataContext = _viewModel;
    }

    And the VideoViewModel looks like this:

    public class VideoViewModel
    {
        private readonly MediaElement _video;
        private readonly string _filename;

        public VideoViewModel(string filename)
        {
            _filename = filename;
            _video = new MediaElement { AutoPlay = true };
            // don't load the stream until the control is ready
            _video.Loaded += VideoLoaded;
        }

        public MediaElement Video
        {
            get { return _video; }
        }

        private async void VideoLoaded(object sender, RoutedEventArgs e)
        {
            var file = await KnownFolders.VideosLibrary.GetFileAsync(_filename);
            var stream = await file.OpenAsync(FileAccessMode.Read);
            _video.SetSource(stream, file.FileType);
        }
    }

    I had to wait for the MediaElement.Loaded event until I could load and set the video stream.

    Read the article

  • Defining a service layer: the text-based adventure

    - by Stacy Vicknair
    Applications these days have more options than ever for a user interface, and the number is only going to grow. A successful product might require native applications for mobile devices, a regular web implementation, or even a gaming console. These systems will often be centralized and data driven. The solution is a fairly solitary one: a service layer! Simply put, take what's shared and put it behind a physical or abstract layer that defines the boundary between the specific user interface and the shared content.

    I know, I know, none of this is complicated. But sometimes it can be difficult to discern what belongs on which side of the line. For instance, say we're creating a service that will provide content for both an ASP.NET MVC application and a WP7 application. Although the content served to each application is the same, there are different paradigms and patterns for displaying that data in the different environments. In ASP.NET MVC, you may create a model specific to a page that combines necessary information. In the WP7 application you might require different sets of data that you will connect via MVVM with the view.

    The general rule of thumb is that any shared content, business rules, or data should exist separately. Any element that is specific to the current UI implementation should be included in a separate library or with the UI implementation itself. The WP7 application doesn't need my MVC-specific model classes. My MVC application doesn't require those INotifyPropertyChanged viewmodels that the WP7 application depends on. In both cases, there should be additional processing done above the service layer to massage the data to the application's specific needs.

    Service-ocalypse: the text-based adventure. What helps me the most in deciding whether something belongs coupled to the UI implementation or in the shared implementation is thinking of the simplest implementation you could have: a console application. You might have played a game like Peasant's Quest: the console app is the text-based adventure game version of your application. If your service were consumed in its simplest form, you would simply have a console-based API for it that issues requests. Maybe those requests aren't SWIM TO BOAT, but they might be CREATE USER JOHN. If I issue a request, I expect that request to be issued to the service. If the service has any exceptions or issues with my input, that business logic should be encapsulated in the service, not implemented in the UI. The service layer should be your functional application in its entirety, and anything above that layer should only assist with the display of that information.

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether that is a perfect model or not, both approaches are attempting to handle a shared problem (of both developers and end users): how do we establish an online identity?

    The question I ask here is: with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution?

    For a quick primer on the setup, jump to page 12 for infrastructure components; here are two stand-outs:

    An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject's identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes.

    The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities.

    Here are the first considered actionable components of the draft:
    Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy
    Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan
    Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem
    Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections
    Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards
    Action 6: Address the Liability Concerns of Service Providers and Individuals
    Action 7: Perform Outreach and Awareness Across all Stakeholders
    Action 8: Continue Collaborating in International Efforts
    Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn’t deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But, Sales doesn’t contact them quickly enough, while the lead is still interested." According to Sirius Decisions: Up to 90% of leads never make it to closure Sales works on only 11% of the leads supplied by Marketing Only 18% of the leads Sales accepts convert to opportunities Yet, 45% of prospects typically buy a product from someone within 12 months The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late. Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It’s time to connect the silos of marketing and sales pipelines and analytics. It’s time for integrated Sales and Marketing automation. Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect–self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process. Integrated lead to revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It’s a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue. Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Turn-Based RPG Battle Instance Layout For Larger Groups

    - by SoulBeaver
    What a title, eh? I'm currently designing a videogame; a turn-based RPG like Final Fantasy (because everybody knows Final Fantasy). It's a 2D sprite game. These are my ideas for combat:

    - The player has a group of 15 members (main character included).
    - During battle, five of the group are designated as active, and appear in the battle.
    - These five may be switched out at leisure, or when one of the five dies.
    - At any time, the waiting members can cast buffs, be healed by the active members, or perform special attacks.
    - Battles should contain 10+ monsters at least. I'm aiming for 20, but I'm not sure if that's possible yet.
    - Battles should feel larger than normal due to the interaction of waiting members, active members and the increased number of monsters per battle.
    - The player has two rows in which to put the active members: front and back.
    - Depending on the implementation, I might allow comboing of player attacks and skills.

    These are just design ideas, so beware! I have not been able to test this out yet - I have no idea yet whether any of these ideas bunched together will make for a compelling game. What sounds good on paper doesn't necessarily have to be good in practice!

    What I'm asking now is how to create the layout for this. My starting point is the battles in Final Fantasy VI, with up to 5-6 monsters on the left and the characters on the right - monsters on both sides if it's a pincer attack. However, this view would not be feasible with my goal of 20 monsters and 5 characters. All the monsters on the left would appear cluttered unless I scale them far, far back. If I create a pincer-like map, then no real pincer attack would be possible. If I space the monsters out, I force the player to scroll the screen - a game mechanic I've come across and not enjoyed, imho.

    My question is: does anybody have any layouts or guides for designing battle maps in turn-based RPGs, especially with a larger number of enemies taken into consideration? How should it look? I am not asking for specific combat mechanics, just the layout for the moment.

    Read the article

  • Help writing server script to ban IP's from a list

    - by Chev_603
    I have a VPS that I use as an OpenVPN and web server. For some reason, my Apache log files are filled with thousands of these hack attempts:

    "POST /xmlrpc.php HTTP/1.0" 404 395

    These attack attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have WordPress on my server), but it's annoying and probably resource consuming as well. I am trying to write a bash script that will do the following:

    1. Search the Apache logs and grab the offending IPs (even if they try it only once),
    2. Sort them into a list with each unique IP on a separate line,
    3. And then block them using iptables rules.

    I am a bash newb, and so far my script does everything except step 3. I can manually block the IPs, but that's tedious and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable so that I (or anyone else who wants to use it) can change the variables to suit whatever situation I/they may deal with in the future. Here is the script so far:

    #!/bin/bash
    ##IP LIST GENERATOR
    ##Author Chev Young
    ##Script to search Apache logs and list IP's based on custom filters
    ##
    ##Define our variables:
    DIRECT=~/Script ##Location of script & where to put results/temp files
    LOGFILE=/var/log/apache2/access.log ## Logfile to search for offenders
    TEMPLIST=xml_temp ## Temporary file name
    IP_LIST=ipstoban ## Name of results file
    FILTER1=xmlrpc ## What are we looking for? (Requests we want to ban)

    cd $DIRECT

    if [ ! -f $TEMPLIST ]; then
        touch $TEMPLIST ##Create temp file
    fi

    cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST

    ## Only interested in the IP's, so:
    sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST

    rm $TEMPLIST ## Clean temp file

    echo "Done. Results located at $DIRECT/$IP_LIST"

    So I need help with the next part of the script, which should ban the IPs (incoming and perhaps outgoing too) from the resulting $IP_LIST file. I don't care if it utilizes UFW or iptables directly, as long as it bans the IPs. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use each line of the result file as a separate variable to do something like:

    ufw deny $IP1 $IP2 $IP3, etc

    Any ideas? Thanks.
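
    For what it's worth, one minimal sketch of the missing step 3 might look like the following, assuming the $DIRECT/$IP_LIST file produced by the script above and a ufw firewall that is already enabled with the usual allow rules (the commented iptables line is an equivalent if you prefer to skip ufw):

        #!/bin/bash
        ## Read each banned IP from the results file and block it with ufw.
        ## DIRECT and IP_LIST mirror the variables in the script above.
        DIRECT=~/Script
        IP_LIST=ipstoban

        while read -r ip; do
            [ -z "$ip" ] && continue                     # skip blank lines
            sudo ufw insert 1 deny from "$ip" to any     # put the deny rule ahead of any allow rules
            # iptables equivalent: sudo iptables -I INPUT -s "$ip" -j DROP
        done < "$DIRECT/$IP_LIST"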

    Read the article

  • Are we ready for the Cloud computing era?

    - by andrewkatumba
    "Elite?" developer circles are abuzz with the notion of cloud computing: the increasing bandwidth, the desire for faster and leaner operations, and of course the need to outsource non-core IT-related business requirements, e.g. word processing, spreadsheets, data backups. In strolls Chrome OS (I am sure other similar OSes will join with their own wagons for us to jump on), offering just that: internet-based services (more like a repository of them), quick, efficient and "reliable", and for the most part cheap and often even free! And we all go rhapsodic!

    It boils down to the age-old dilemma, "if the cops are so busy protecting us then who will protect them" (even the folks back at Hollywood keep us reminded)! Who is going to ensure that these internet-based services do not go down (either intentionally or by some malicious third party), leading to a colossal multinational disaster? At the risk of sounding pessimistic, IT IS NOT AN ISSUE OF TRUST; this is but a mere case of Murphy's Law! What then? Should the "cloud" be trusted to this extent at this time?

    This is an era where challenges are rapidly solved with lightning promptness to "beat the competition". My hope is that our solutions are not just creating problems that we may not be able to solve! Keeping my ear on the ground.

    Read the article

  • How to Update Remote Diagnostic Agent (RDA) to Latest Version?

    - by Daniel Mortimer
    Remote Diagnostic Agent (RDA) 4.28 was released on 12th June. Full details can be found in this My Oracle Support documentRDA 4 Release Notes [ID 414970.1]From a Fusion Middleware Core Component, Install and Administration perspective this latest release does not offer any significant new features or changes. However, despite the lack of Fusion Middleware specific new features in version 4.28, Remote Diagnostic Agent still comes as highly recommended. It is incredibly useful problem solving / troubleshooting aid. Support engineers dealing with Service Requests often request RDA output as it collects just about everything you might need to get a view of the state and configuration of the host operating system, network setup and Fusion Middleware components. To find out more take a look at Running RDA Against Oracle Fusion Middleware 11g [ID 853437.1] Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1] Note: While the latter document looks at RDA from the perspective of WebLogic Server, much of the advice given in the videos can be applied to other Fusion Middleware products.Ok, let's get back on track with the topic suggested by the title. If you are already familiar with Remote Diagnostic Agent you may ask the question - 'How do I keep my RDA at the latest version?' The answer is in "Running RDA Against Oracle Fusion Middleware 11g [ID 853437.1]". To quote: There are two methods: 1. Upgrade RDA via OCM (Oracle Configuration Manager) Refer to the advice given in: Remote Diagnostic Agent (RDA) Upgrade README [ID 1309034.1] OR 2. Manually download and upgrade to the latest version. To quote from Remote Diagnostic Agent (RDA) 4 - FAQ [ID 330363.1] +++++++++++++++++++++++++++++++++++++++++++++++++++++++ How do I upgrade my RDA 4.x installation from the prior release? The most simplest and reliable way to upgrade your RDA installation is delete or move your old installation to a new location. Then install the new release into the location you had the prior release installed. If you want to reuse you old setup.cfg file, you can place the older version into the new <rda> directory and it will try to upgrade your setup.cfg to the new features. A second approach is to install the latest RDA into another directory, then if needed copy the old setup.cfg file to the new RDA directory. When the new RDA is run for the first time, it will try to upgrade your setup.cfg to the new features. +++++++++++++++++++++++++++++++++++++++++++++++++++++++ The upgrade method via Oracle Configuration Manager is nice because it allows RDA to be auto updated whenever a new release of RDA is made available (which roughly speaking is every 3 months). However, it does require you to install and configure Oracle Configuration Manager in addition to RDA. A quick guide to Fusion Middleware 11g and OCM can be found in this support document.Configuring OCM in Oracle Fusion Middleware 11g? A Quick and Easy Guide [ID 1096871.1]

    Read the article

  • The partition table is corrupt

    - by Tim
    I have a corrupt the partition table on the laptop that is running Ubunutu 10.4. Before the partition table was corrupt I had the following partitions: 2 primary partitions: 1st - NTFS 2nd - Extended 4 logical partitons that are built within 2nd extended: 1st NTFS (68 Gib) 2nd Linux (19 Gib) 3rd Swap (1.4 Gib) 4th Linux (24 Gib) The physical order of these partitions was the following: ( 4th Linux ) - ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) The logical order of the partition was different: ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) ( 4th Linux ) NTFS partition was big and it resided between 2 Linux partitions, neither of these partitions had enough space to install Oracle 11g. Therefore, I decided to a) either move the NTFS partion to the left or b) remove it completely and extend partition where Linux resides. As I tool I have chosen GParted. But unfortunately it was not able to move the partition because he found that in NTFS partition there are some blocks that are referenced multiple times. Also it was not able to remove the partition neither, because in this case the partitions that follow it ( 2nd Linux ) - ( 3rd Swap ) have to be in his opinion also removed, because the organization of extended partition is a linked list. Since GParted was not able to do such thing I was trying to find another tool. I found diskdrake tool on PSLinuxOS distribution of linux. That tool silently deleted ( 1st NTFS ) partition and I thought that everything was fine. But diskdrake has damaged the partition in a way that I am not able either to boot from the hard disk nor to see the partitions with GParted and even with diskdrake itself! Fortunately I have a live CD of Ubuntu 8.10 and I am able to boot and see hard disk. I have 2 ideas how I can solve the problem: 1) Manually change disk partitions and point them to the correct partitions. 2) Create partition table with GParted that as much as possible is the same with the previous one I find the 2nd approach less time consuming but some data will be lost because of it is not possible to place borders of the partitions exactly how it was before. And moreover I am not sure if such approach would work, for example, if the OS is able to locate files after repartitioning. I feel like that it will but not 100% sure. Are there some ideas how the problem may be solved?

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model.  ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups.  We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.The NFS and SMB protocol services are also integrated with the identity mapping service and shares are not restricted to UNIX permissions or Windows permissions.  All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols and our model is native access to any share from either protocol.Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise not because they offer a benefit.  Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server.  Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.With a single, integrated sharing and access control model: If you connect from Windows or another SMB/CIFS client: The system creates a credential containing both your Windows identity and your UNIX/NIS identity.  The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups. If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs). If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity. If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner.  Identity mapping also supports access checking if you are being assessed for access via the ACL. If you connect via NFS (typically from a UNIX client): The system creates a credential containing your UNIX/NIS identity (including groups). Files you create will be owned by your UNIX/NIS identity. If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner.  Again, mapping is fully supported during ACL processing. The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system.  This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • Remote Graphics Diagnostics with Windows RT 8.1 and Visual Studio 2013

    - by Michael B. McLaughlin
    Originally posted on: http://geekswithblogs.net/mikebmcl/archive/2013/11/12/remote-graphics-diagnostics-with-windows-rt-8.1-and-visual-studio.aspx

    This blog post is a brief follow-up to my What's New in Graphics and Game Development in Visual Studio 2013 post on the MVP Award blog. While writing that post I was testing out various features to try to make sure everything worked as expected. I had some trouble getting Remote Graphics Diagnostics (a/k/a remote graphics debugging) working on my first generation Surface RT (upgraded to Windows RT 8.1). It was all the more strange since I could use remote debugging when doing CPU debugging; it was just graphics debugging that was causing trouble. After some discussions with the great folks who work on the graphics tools in Visual Studio, they were able to repro the problem and recommend a solution: my Surface RT needed the ARM Kits policy installed on it. Once I followed the instructions on the previous link, I could successfully use Remote Graphics Diagnostics on my Surface RT.

    Please note that this requires Windows RT 8.1 RTM (i.e. not Preview) and that Remote Graphics Diagnostics on ARM only works when you are using Visual Studio 2013, as it is a new feature (it should work just fine using the Express for Windows version).

    Also, when I installed the ARM Kits policy I needed to do two things to get it to work properly. First, when following the "How to install the Kits policy" instructions, I needed to copy the SecureBoot folder into Program Files on my Surface RT (specifically, I copied the SecureBoot folder to "C:\Program Files\Windows Kits\8.1\bin\arm\" on my Surface RT, creating any necessary directories). It may work if it's in any system folder; I didn't test any others after I got it working. I had initially put it in my Downloads folder and tried installing it from there. When the machine restarted it displayed a worrisome error message. I repeatedly pressed the button that would allow me to retry, and eventually the machine rebooted and managed to recover itself to its previous state. Second, I needed to install it as an Administrator. The instructions say that this might be necessary. For me it was.

    Remote Graphics Diagnostics is a great new feature in Visual Studio 2013, so I definitely encourage all of you to check it out!

    Read the article

  • What would be the better way for data retrieval in an application that needs to handle a limited amount of data?

    - by Milanix
    This is not really a coding question, since I am not adding any code here; including my code snippets would make this question really long. Instead, I am interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis depending on user preference, the actual version update happens only once every few months or so, since it is made by another authority which doesn't provide an API but rather announces its changes publicly. The XML contains roughly (n groups) x (days in a week) x (n schedule entries); the number of groups is usually 6 and the number of schedule entries is usually 2, so there would usually be only around 100 strings. Although I am using SQLite at the moment, I want to know how best to update the database. Should I show a progress dialog saying that the application is updating and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to be made while the user is searching, which also uses the database; that causes a "database already open" exception, or at least I have faced this problem before. Is it better to parse the XML every time the user wants to view something, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance? It would really be a great help if anyone could give me a better overview of the options, or maybe counter-arguments against each. Many thanks!
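    As a neutral sketch of the "check the version first, then replace the data in one transaction" flow being described (written in Python purely for illustration rather than Android/Java; the URL, element names and table names are all made up, and the schedule/meta tables are assumed to already exist):

        # Illustrative sketch of "update the database only if the remote version is newer".
        import sqlite3
        import urllib.request
        import xml.etree.ElementTree as ET

        DB_PATH = "schedule.db"
        FEED_URL = "https://example.org/schedule.xml"   # placeholder

        def local_version(conn):
            row = conn.execute("SELECT value FROM meta WHERE key = 'version'").fetchone()
            return int(row[0]) if row else -1

        def refresh_if_newer():
            with urllib.request.urlopen(FEED_URL) as resp:
                root = ET.fromstring(resp.read())
            remote = int(root.get("version", "0"))

            conn = sqlite3.connect(DB_PATH)
            try:
                if remote <= local_version(conn):
                    return False                          # already up to date
                with conn:                                # one transaction for the whole swap
                    conn.execute("DELETE FROM schedule")
                    for entry in root.iter("entry"):
                        conn.execute(
                            "INSERT INTO schedule(grp, day, text) VALUES (?, ?, ?)",
                            (entry.get("group"), entry.get("day"), entry.text))
                    conn.execute(
                        "INSERT OR REPLACE INTO meta(key, value) VALUES ('version', ?)",
                        (remote,))
                return True
            finally:
                conn.close()

    Doing the delete-and-reinsert inside a single transaction means a reader never sees a half-updated table, which is one way to address the "update while the user is searching" worry; with only around 100 rows the refresh should be quick enough that a blocking progress dialog is probably unnecessary.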

    Read the article

  • Analysing SQLBits Feedback

    - by jamiet
    Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York – “SSIS Dataflow Performance Tuning” (the video is available online if you wish to see it). As you may have gathered from previous posts on this blog and my less-SQLy-focused Wordpress blog, I am a big fan of collecting and tracking both personal and public data, and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments. The SQLBits feedback is also useful because they use a consistent format – the same questions are asked each time – which means it is particularly easy to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps it's an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc., and there’s no reason that we as session speakers cannot do the same thing.

    To that end I have started to store my feedback in an Excel spreadsheet of my own which, in the interests of transparency, is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot:

    I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please continue to keep the feedback format consistent in the future, and I encourage them to publish all of the feedback in an anonymised form. I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing, so that you get an insight into whether or not you are improving over time. It is not difficult to set up, and maintaining it as you do more sessions takes very little effort.

    Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let’s imagine a utopia where there were a standard set of questions for capturing session feedback that were leveraged at every conference regardless of subject matter, location or culture; that would give rise to immense cross-conference and cross-discipline analysis – the data analyst in me goes giddy at the thought of it. It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all things which I have written about in the past. I don’t know whether we will ever reach the stage where every piece of data has structured, descriptive metadata associated with it, but I live in hope. @Jamiet
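    For anyone who prefers to keep this kind of aggregation outside Excel, the pivot described above boils down to a few lines of Python/pandas. The sketch below is purely illustrative, with made-up column names and scores, and is not the actual spreadsheet.

        # Illustrative only: aggregate per-question feedback scores the way the
        # Excel pivot table does. The column names and numbers are invented.
        import pandas as pd

        feedback = pd.DataFrame([
            {"conference": "SQLBits 7", "question": "Speaker knowledge", "score": 9},
            {"conference": "SQLBits 7", "question": "Usefulness",        "score": 8},
            {"conference": "SQLBits 6", "question": "Speaker knowledge", "score": 8},
        ])

        pivot = feedback.pivot_table(index="question", columns="conference",
                                     values="score", aggfunc="mean")
        print(pivot)  # mean score per question per conference, handy for spotting trends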

    Read the article

  • Desperately Need Help: After a mishap, a folder shows 0 files in it

    - by bobby
    I'm hoping some of you may be able to shed some light on this scenario: I was working on an odt document, one of many files in a folder among many on an internal hard drive. Some kind of glitch occurred and the document crashed (this could have been some kind of power surge while another hard drive was being unmounted). As I looked into the folders surrounding the one in which my odt document was stored, they started to show 0 files in them. I immediately switched off the PC and then restarted.

    Upon the restart, the folders would show the thousands of files I've stored in them and then, within 5 minutes as I started to back them up, freeze and cut off the transfer. When I tried to open anything on the internal hard drive, be it an avi film, an mp3, a cbr or a word doc, they all showed blank or wouldn't work. Some folders had vastly fewer files showing. Eventually things calmed down. I shut down the PC, checked that the connections were in firmly, gave it a vacuum and restarted. All the files eventually showed up and I started to back them up (which I'd bought a hard drive for anyway but had been distracted and not done).

    All folders show except the one which contained the document I was working on at the time of the trouble. Strangely, it is one that had shown itself as full on several occasions during those restarts. It shows zero files now; Properties shows zero files and zero space taken by it. Yet when I drop a file into this folder by pasting it in, it disappears too. Opening the folder, there is nothing there. But if I paste that document again, the PC asks whether I would like to replace the existing file with the same name (which I can't see); when I click yes, the file appears. When I exit, the folder shows 0 files. Going back into the folder, the file has disappeared again.

    I'm hoping that someone can give me tips to recover the files in this folder; it would be greatly, greatly appreciated. All other films, music, comics and documents show and are fine!

    Read the article
