Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Perl syntax error [closed]

    - by Linny
    I am a beginner taking a Perl programming course. We are trying to write a basic program for counting nucleotides in a DNA string. I'm getting syntax errors on lines 28 and 70, the lines that contain a single bracket, and I don't know why. It also reports that I have compilation errors, and I have no idea where to start figuring those out. Here is the code as I have it:

      # The purpose of this program is to count the number of nucleotides in a strand. Each protein is counted separately #
      print "/n NOTE: Nucleotide counting /n";
      #
      use strict;    # enforce variable declarations
      use warnings;  # enable compiler warnings
      # Display number of A,a,T,t,G,g,C,c, nucleotides in a word or sequence of letters. #
      my ($base) = '';            # an extracted letter from a string
      my ($nuceotide_count) = 0 ; # the current position within the word
      my ($position) = 0 ;        # number of vowels in user-supplied word
      my ($word) = '';            # word to be processed
      my ($A_count) = 0 ;         # of A nucleotides in the user-supplied sequence
      my ($a_count) = 0 ;         # of A nucleotides in the user-supplied sequence
      my ($C_count) = 0 ;         # of C nucleotides in the user-supplied sequence
      my ($c_count) = 0 ;         # of C nucleotides in the user-supplied sequence
      my ($G_count) = 0 ;         # of G nucleotides in the user-supplied sequence
      my ($g_count) = 0 ;         # of G nucleotides in the user-supplied sequence
      my ($T_count) = 0 ;         # of T nucleotides in the user-supplied sequence
      my ($t_count) = 0 ;         # of T nucleotides in the user-supplied sequence
      word = (STDIN)
      for ($position = 0);($position
      if (($base eq 'a') or ($base eq 'A')) { ++$A_count; } # end if
      ++$position;
      if (($base eq 'T') or ($base eq 't')) { ++$T_count; } end if
      ++$position;
      if (($base eq 'G') or ($base eq 'g')) { ++$G_count; } # end if
      ++$position;
      if (($base eq 'C') or ($base eq 'c')) { ++$C_count; } # end if
      ++$position;
      } # end for
      # Display final results. #
      print " \n The number of A or a neucleotides is: $A_count";
      print " \n The number of T or t neucleotides is: $T_count";
      print " \n The number of G or g neucleotides is: $G_count";
      print " \n The number of C or c neucleotides is: $C_count";
      print " \n\n Program completed successfully. \n" ;
      exit ;
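
    Since the question is mostly about where the loop goes wrong, here is a minimal working sketch of the intended program for comparison. It is an assumption about what was meant (reading one sequence from STDIN and walking it with substr), not a line-by-line fix of the exact posted code:

      #!/usr/bin/perl
      use strict;
      use warnings;

      print "\n NOTE: Nucleotide counting \n";

      my $word = <STDIN>;   # sequence to be processed, read from standard input
      chomp $word;

      my ($A_count, $T_count, $G_count, $C_count) = (0, 0, 0, 0);

      # Walk the string one character at a time and bump the matching counter.
      for (my $position = 0; $position < length($word); ++$position) {
          my $base = substr($word, $position, 1);   # extract one letter
          if    ($base eq 'A' or $base eq 'a') { ++$A_count; }
          elsif ($base eq 'T' or $base eq 't') { ++$T_count; }
          elsif ($base eq 'G' or $base eq 'g') { ++$G_count; }
          elsif ($base eq 'C' or $base eq 'c') { ++$C_count; }
      }

      print " \n The number of A or a nucleotides is: $A_count";
      print " \n The number of T or t nucleotides is: $T_count";
      print " \n The number of G or g nucleotides is: $G_count";
      print " \n The number of C or c nucleotides is: $C_count";
      print " \n\n Program completed successfully. \n";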

    Read the article

  • How do I stop icons appearing on the desktop under conky?

    - by Seamus
    When I download something to my desktop, or insert a CD or flash drive, the icon appears on my desktop. When I have conky running, the icon sometimes appears in the top right corner, underneath conky; where I can't see it. How do I stop this happening? My .conkyrc is pasted below. I didn't write it all myself, so I'm not entirely sure what I need to change, or what parts are relevant for this particular question... # UBUNTU-CONKY # A comprehensive conky script, configured for use on # Ubuntu / Debian Gnome, without the need for any external scripts. # # Based on conky-jc and the default .conkyrc. # INCLUDES: # - tail of /var/log/messages # - netstat shows number of connections from your computer and application/PID making it. Kill spyware! # # -- Pengo # # Create own window instead of using desktop (required in nautilus) own_window yes own_window_type override own_window_transparent yes own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager # Use double buffering (reduces flicker, may not work for everyone) double_buffer yes # fiddle with window use_spacer right # Use Xft? use_xft yes xftfont DejaVu Sans:size=8 xftalpha 0.8 text_buffer_size 2048 # Update interval in seconds update_interval 3.0 # Minimum size of text area # minimum_size 250 5 # Draw shades? draw_shades no # Text stuff draw_outline no # amplifies text if yes draw_borders no uppercase no # set to yes if you want all text to be in uppercase # Stippled borders? stippled_borders 3 # border margins border_margin 9 # border width border_width 10 # Default colors and also border colors, grey90 == #e5e5e5 default_color grey own_window_colour brown own_window_transparent yes # Text alignment, other possible values are commented #alignment top_left alignment top_right #alignment bottom_left #alignment bottom_right # Gap between borders of screen and text gap_x 10 gap_y 20 # stuff after 'TEXT' will be formatted on screen TEXT $color ${color orange}SYSTEM ${hr 2}$color $nodename $sysname $kernel on $machine ${color orange}CPU ${hr 2}$color ${freq}MHz Load: ${loadavg} Temp: ${acpitemp} $cpubar ${cpugraph 000000 ffffff} NAME ${goto 150}PID ${goto 200}CPU% ${goto 250}MEM% ${top name 1} ${goto 150}${top pid 1} ${goto 200}${top cpu 1} ${goto 250}${top mem 1} ${top name 2} ${goto 150}${top pid 2} ${goto 200}${top cpu 2} ${goto 250}${top mem 2} ${top name 3} ${goto 150}${top pid 3} ${goto 200}${top cpu 3} ${goto 250}${top mem 3} ${top name 4} ${goto 150}${top pid 4} ${goto 200}${top cpu 4} ${goto 250}${top mem 4} ${color orange}MEMORY / DISK ${hr 2}$color RAM: $memperc% ${membar 6}$color Swap: $swapperc% ${swapbar 6}$color Home: ${fs_free_perc /home}% ${fs_bar 6 /}$color Free Space: ${fs_free /home} ${color orange}NETWORK (${addr eth0}) ${hr 2}$color Down: $color${downspeed eth0} k/s ${alignr}Up: ${upspeed eth0} k/s ${downspeedgraph eth0 25,140 000000 ff0000} ${alignr}${upspeedgraph eth0 25,140 000000 00ff00}$color Total: ${totaldown eth0} ${alignr}Total: ${totalup eth0} ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr} ${color orange}WIRELESS (${addr wlan0}) ${hr 2}$color Down: $color${downspeed wlan0} k/s ${alignr}Up: ${upspeed wlan0} k/s ${downspeedgraph wlan0 25,140 000000 ff0000} ${alignr}${upspeedgraph wlan0 25,140 000000 00ff00}$color Total: ${totaldown wlan0} ${alignr}Total: ${totalup wlan0} ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}
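
    The parts of that config relevant to window stacking are the own_window_* lines near the top. As a hedged experiment (not a guaranteed fix, since icon placement itself is done by the file manager), the usual first step is to try a different own_window_type so that conky draws on the desktop rather than in its own override-redirect window:

      # Settings that control how conky's window stacks against the desktop.
      # 'desktop' (or 'normal') instead of 'override' often lets desktop icons
      # stay visible; which value behaves best depends on the window manager,
      # and the hints line may be ignored for non-normal window types.
      own_window yes
      own_window_type desktop
      own_window_transparent yes
      own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

    If that does not help, the remaining lever is conky's position (alignment, gap_x, gap_y) so it no longer overlaps the corner where new icons are dropped.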

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: database unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news.
    Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT – that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they – it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there – but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now.
    All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above.
    Having now had a look at the new features I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. Here is what a query looked like after such an operation:

      SELECT *
      FROM bi.ProcessMessageLog pml
      INNER JOIN bi.[LogMessageType] lmt
          ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
      WHERE pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled'
      AND lmt.[LogMessageType] = 'Warning';

    which is obviously not ideal (note the column name spliced into the string literal). Thankfully that seems to have been solved with this latest release.
    One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as "fiddly" – I was being kind when I said that! @Jamiet

    Read the article

  • Best Creational Pattern for loggers in a multi-threaded system?

    - by Dipan Mehta
    This is a follow-up to my earlier question: Concurrency pattern of logger in multithreaded application. As suggested by others, I am asking this part separately.
    The lesson from the last question: in a multi-threaded environment, the logger should be made thread safe and probably asynchronous (messages are queued while a background thread does the writing, releasing the requesting object's thread). The logger could be a singleton, or it could be a per-group logger, which is a generalization of the singleton. The question that arises now is how the logger should be assigned to each object. There are two options I can think of:
    1. The object requests the logger: should each object call some global API such as get_logger()? Such an API returns "the" singleton or the group logger. However, this builds an assumption about the application environment into the logger, which I think is a kind of coupling: if the same object needs to be reused by another application, that application also has to implement such a method.
    2. Assign the logger through some known API: the alternative is to define a kind of virtual (abstract) class, implemented by the application to suit its own structure, and assign it to the object, for example in the constructor. This is the more general method. Unfortunately, when there are many objects - or rather a tree of objects - passing the logger down to each level gets quite messy.
    Is there a better way to do this? If you had to pick one of the above, which approach would you pick and why? Other questions remain open about configuration: how are objects' names or IDs assigned so that they can be printed in log messages (as module names)? How do these objects find the appropriate properties (such as log levels and other parameters)? In the first approach, the central API has to deal with all of this variety; in the second approach, there is additional work for the application. Hence, I want to learn from people's real experience how to write a logger effectively in such an environment.
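
    The question does not name a language, so the following is a hedged C++ sketch of option 2: the "known API" is a small abstract logger interface that the application implements and hands to each object at construction time. A shared pointer keeps the wiring manageable even when many objects share one logger instance; all names are illustrative.

      #include <iostream>
      #include <memory>
      #include <string>

      // Abstract interface owned by the library; the application implements it.
      class Logger {
      public:
          virtual ~Logger() = default;
          virtual void log(const std::string& module, const std::string& message) = 0;
      };

      // One possible application-side implementation (could be asynchronous, per-group, etc.).
      class ConsoleLogger : public Logger {
      public:
          void log(const std::string& module, const std::string& message) override {
              std::cout << "[" << module << "] " << message << "\n";
          }
      };

      // A library object receives its logger and its module name at construction time.
      class Worker {
      public:
          Worker(std::shared_ptr<Logger> logger, std::string name)
              : logger_(std::move(logger)), name_(std::move(name)) {}

          void doWork() { logger_->log(name_, "doing work"); }

      private:
          std::shared_ptr<Logger> logger_;
          std::string name_;
      };

      int main() {
          auto logger = std::make_shared<ConsoleLogger>();
          Worker w(logger, "worker-1");   // the tree of objects shares one logger instance
          w.doWork();
      }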

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    I was reading through some articles on the advantages of creating Generic Repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once:

      IRepository repo = new EfRepository(); // Would normally pass through IOC into constructor

      var c1 = new Country() { Name = "United States", CountryCode = "US" };
      var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
      var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
      var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
      var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
      var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };

      repo.Add<Country>(c1);
      repo.Add<Country>(c2);
      repo.Add<Country>(c3);
      repo.Add<Province>(p1);
      repo.Add<Province>(p2);
      repo.Add<Province>(p3);
      repo.Save();

    However, the rest of the implementation of the Repository has a heavy reliance on Linq:

      IQueryable<T> Query();
      IList<T> Find(Expression<Func<T,bool>> predicate);
      T Get(Expression<Func<T,bool>> predicate);
      T First(Expression<Func<T,bool>> predicate);
      //... and so on

    This repository pattern worked fantastically for Entity Framework, and pretty much offered a 1-to-1 mapping of the methods available on DbContext/DbSet. But given the slow uptake of Linq on data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the Repository, but PetaPoco doesn't support Linq Expressions, which makes a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and treat it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate.
    Is the Generic Repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? Edit: The original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).
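
    For what it's worth, here is a hedged sketch of the fallback described above for a non-Linq provider such as PetaPoco: a thin generic base for the CRUD methods plus a specific repository that owns its own "where" logic. The class and method names are illustrative, and the PetaPoco calls are written from memory, so treat them as assumptions rather than the library's exact surface:

      // Generic base: only the operations that don't need Linq expression trees.
      public interface IRepository<T>
      {
          IEnumerable<T> GetAll();
          T GetById(object id);
          void Add(T entity);
          void Update(T entity);
          void Delete(T entity);
      }

      // Specific repository: the "where" clauses live here instead of in a predicate.
      public interface ICountryRepository : IRepository<Country>
      {
          Country GetByCountryCode(string countryCode);
          IEnumerable<Country> GetByNamePrefix(string prefix);
      }

      public class CountryRepository : ICountryRepository
      {
          private readonly PetaPoco.Database _db;
          public CountryRepository(PetaPoco.Database db) { _db = db; }

          public IEnumerable<Country> GetAll() { return _db.Query<Country>("SELECT * FROM Country"); }
          public Country GetById(object id) { return _db.SingleOrDefault<Country>(id); }
          public void Add(Country entity) { _db.Insert(entity); }
          public void Update(Country entity) { _db.Update(entity); }
          public void Delete(Country entity) { _db.Delete(entity); }

          public Country GetByCountryCode(string countryCode)
          {
              return _db.SingleOrDefault<Country>("WHERE CountryCode = @0", countryCode);
          }

          public IEnumerable<Country> GetByNamePrefix(string prefix)
          {
              return _db.Query<Country>("WHERE Name LIKE @0", prefix + "%");
          }
      }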

    Read the article

  • Monitoring C++ applications

    - by Scott A
    We're implementing a new centralized monitoring solution (Zenoss). Incorporating servers, networking, and Java programs is straightforward with SNMP and JMX. The question, however, is what are the best practices for monitoring and managing custom C++ applications in large, heterogenous (Solaris x86, RHEL Linux, Windows) environments? Possibilities I see are: Net SNMP Advantages single, central daemon on each server well-known standard easy integration into monitoring solutions we run Net SNMP daemons on our servers already Disadvantages: complex implementation (MIBs, Net SNMP library) new technology to introduce for the C++ developers rsyslog Advantages single, central daemon on each server well-known standard unknown integration into monitoring solutions (I know they can do alerts based on text, but how well would it work for sending telemetry like memory usage, queue depths, thread capacity, etc) simple implementation Disadvantages: possible integration issues somewhat new technology for C++ developers possible porting issues if we switch monitoring vendors probably involves coming up with an ad-hoc communication protocol (or using RFC5424 structured data; I don't know if Zenoss supports that without custom Zenpack coding) Embedded JMX (embed a JVM and use JNI) Advantages consistent management interface for both Java and C++ well-known standard easy integration into monitoring solutions somewhat simple implementation (we already do this today for other purposes) Disadvantages: complexity (JNI, thunking layer between native C++ and Java, basically writing the management code twice) possible stability problems requires a JVM in each process, using considerably more memory JMX is new technology for C++ developers each process has it's own JMX port (we run a lot of processes on each machine) Local JMX daemon, processes connect to it Advantages single, central daemon on each server consistent management interface for both Java and C++ well-known standard easy integration into monitoring solutions Disadvantages: complexity (basically writing the management code twice) need to find or write such a daemon need a protocol between the JMX daemon and the C++ process JMX is new technology for C++ developers CodeMesh JunC++ion Advantages consistent management interface for both Java and C++ well-known standard easy integration into monitoring solutions single, central daemon on each server when run in shared JVM mode somewhat simple implementation (requires code generation) Disadvantages: complexity (code generation, requires a GUI and several rounds of tweaking to produce the proxied code) possible JNI stability problems requires a JVM in each process, using considerably more memory (in embedded mode) Does not support Solaris x86 (deal breaker) Even if it did support Solaris x86, there are possible compiler compatibility issues (we use an odd combination of STLPort and Forte on Solaris each process has it's own JMX port when run in embedded mode (we run a lot of processes on each machine) possibly precludes a shared JMX server for non-C++ processes (?) Is there some reasonably standardized, simple solution I'm missing? Given no other reasonable solutions, which of these solutions is typically used for custom C++ programs? My gut feel is that Net SNMP is how people do this, but I'd like other's input and experience before I make a decision.

    Read the article

  • Editing /.config/dconf/user

    - by user86322
    I am having a problem with Gnome 3 (actually, I have it set to fallback mode, i.e. Gnome 2). I have two displays and I need a separate X screen for each one (I used nvidia-xconfig and nvidia-settings to set this up). However, every time I restart X or log in, Gnome seems to add more object values under /gnome/gnome-panel/layouts. For example, the first time I set up the two separate X screens I had clock under objects; after a log out/in there were clock and clock1; after another log out/in there were clock, clock1 and clock2 ... and thirty or so logins later I am up to clock42! The same thing goes for top-panels, menu-bars, etc.
    After a while, I found out I could remove all of those using dconf-editor: go to /gnome/gnome-panel/layouts, remove all the repetitions under the objects-id-list and top-id-list fields, and leave one value for each object. This is not a real solution, but at least it lets me keep using Linux without too much trouble. The problem still recurs every time I restart X or log in.
    I have now learned about dconf and where the user profile settings are located (~/.config/dconf/user), and that one can use dconf to see the keys. In my case I need to change or remove many keys (all those clockX, workspace-X, menu-bar-X, etc., where X goes from 1 to 42 and counting), so it is really tedious to change them one by one using "dconf write". Then I found "dconf dump", which lets me dump everything into a text file and edit that file quickly (i.e. "dconf dump / >> dump_user.txt"). The problems? Two of them:
    1. How do I "load" the edited dump_user.txt back into the user profile? (I read somewhere about a "dconf reload", but reload does not exist as a dconf command.)
    2. How do I stop Gnome from adding more objects to my desktop environment every time I log in or restart X?
    NOTE: The problem doesn't occur when I set the displays to use the TwinView feature (i.e., the desktop is extended/shared by both displays). However, in my case I need two separate X screens. Any help/suggestion would be greatly appreciated. Thanks
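
    On question 1, a hedged pointer: dconf has a load subcommand that is the counterpart of dump and reads a keyfile from standard input, so reloading an edited dump taken from the root path should look roughly like this (check "dconf help load" on your version first):

      # dump current settings, edit the file, then load it back
      dconf dump / > dump_user.txt
      # ... edit dump_user.txt ...
      dconf load / < dump_user.txt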

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but those were more expensive – by a lot. As time moved on and the computing industry matured, I actually find that I can buy a vendor’s system as cheaply – and in some cases far more cheaply – than I can build it myself.   This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… Mostly. If you need an office productivity package, you simply buy one or use open-sourced software for that. There’s really no need to write your own Word Processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”.  Sure, if you’re after “Software as a Service” – type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure) you can simply use one of those, or if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines) you can get that, but if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem, that we often start with the solution and try to fit it to the problem, rather than the other way around.  When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • Throwing exception from a property when my object state is invalid

    - by Rumi P.
    Microsoft guidelines say: "Avoid throwing exceptions from property getters", and I normally follow that. But my application uses Linq2SQL, and there is the case where my object can be in an invalid state because somebody or something wrote nonsense into the database. Consider this toy example:

      [Table(Name="Rectangle")]
      public class Rectangle
      {
          [Column(Name="ID", IsPrimaryKey = true, IsDbGenerated = true)]
          public int ID { get; set; }

          [Column(Name="firstSide")]
          public double firstSide { get; set; }

          [Column(Name="secondSide")]
          public double secondSide { get; set; }

          public double sideRatio
          {
              get { return firstSide/secondSide; }
          }
      }

    Here, I could write code which ensures that my application never writes a Rectangle with a zero-length side into the database. But no matter how bulletproof I make my own code, somebody could open the database with a different application and create an invalid Rectangle, especially one with a 0 for secondSide. (For this example, please forget that it is possible to design the database in a way such that writing a side length of zero into the rectangle table is impossible; my domain model is very complex and there are constraints on model state which cannot be expressed in a relational database.) So, the solution I am gravitating to is to change the getter to:

      get
      {
          if (firstSide > 0 && secondSide > 0)
              return firstSide/secondSide;
          else
              throw new System.InvalidOperationException("All rectangle sides should have a positive length");
      }

    The reasoning behind not throwing exceptions from properties is that programmers should be able to use them without having to take precautions about catching and handling them. But in this case, I think that it is OK to continue to use this property without such precautions:
    - If the exception is thrown because my application wrote an invalid (zero-length) rectangle side into the database, then this is a serious bug. It cannot and shouldn't be handled in the application, but there should be code which prevents it. It is good that the exception is visibly thrown, because that way the bug is caught.
    - If the exception is thrown because a different application changed the data in the database, then handling it is outside the scope of my application, so I can't do anything about it even if I catch it.
    Is this good enough reasoning to get over the "avoid" part of the guideline and throw the exception? Or should I turn it into a method after all? Note that in the real code, the properties which can have an invalid state feel less like the result of a calculation, so they are "natural" properties, not methods.
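
    Since the question ends by asking whether the property should become a method, here is a hedged sketch of that alternative, plus a Try-style variant; the method names are illustrative, not a recommendation from the original post:

      public double GetSideRatio()
      {
          if (firstSide <= 0 || secondSide <= 0)
              throw new System.InvalidOperationException("All rectangle sides should have a positive length");
          return firstSide / secondSide;
      }

      // Or, if callers are expected to cope with invalid rows rather than crash:
      public bool TryGetSideRatio(out double ratio)
      {
          if (firstSide > 0 && secondSide > 0)
          {
              ratio = firstSide / secondSide;
              return true;
          }
          ratio = double.NaN;
          return false;
      }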

    Read the article

  • Am I deluding myself? Business analyst transition to programmer

    - by Ryan
    Current job: Working as the lead business analyst for a Big 4 firm, leading a team of developers and testers working on a large scale re-platforming project (4 onshore dev, 4 offshore devs, several onshore/offshore testers). Also work in a similar capacity on other smaller scale projects. Extent of my role: Gathering/writing out requirements, creating functional specifications, designing the UI (basically mapping out all front-end aspects of the system), working closely with devs to communicate/clarify requirements and come up with solutions when we hit roadblocks, writing test cases (and doing much of the testing), working with senior management and key stakeholders, managing beta testers, creating user guides and leading training sessions, providing key technical support. I also write quite a few macros in Excel using VBA (several of my macros are now used across the entire firm, so there are maybe around 1000 people using them) and use SQL on a daily basis, both on the SQL compact files the program relies on, our SQL Server data and any Access databases I create. The developers feel that I am quite good in this role because I understand a lot about programming, inherent system limitations, structure of the databases, etc so it's easier for me to communicate ideas and come up with suggestions when we face problems. What really interests me is developing software. I do a fair amount of programming in VBA and have been wanting to learn C# for awhile (the dev team uses C# - I review code occasionally for my own sake but have not had any practical experience using it). I'm interested in not just the business process but also the technical side of things, so the traditional BA role doesn't really whet my appetite for the kind of stuff I want to do. Right now I have a few small projects that managers have given me and I'm finding new ways to do them (like building custom Access applications), so there's a bit here and there to keep me interested. My question is this: what I would like to do is create custom Excel or Access applications for small businesses as a freelance business (working as a one-man shop; maybe having an occasional contractor depending on a project's complexity). This would obviously start out as a part-time venture while I have a day job, but eventually become a full-time job. Am I deluding myself to thinking I can go from BA/part-time VBA programmer to making a full-time go of a freelance business (where I would be starting out just writing custom Excel/Access apps in VBA)? Or is this type of thing not usually attempted until someone gains years of full-time programming experience? And is there even a market for these types of applications amongst small businesses (and maybe medium-sized) businesses?

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync: on every commit we have to run the new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this:

      class Foo
      {
          public bool LoadFoo()
          {
              bool blnResult = false;
              if (this.FooID == 0)
              {
                  throw new Exception("FooID must be set before calling this method.");
              }

              DataSet ds = // ... call to Sproc
              if (ds.Tables[0].Rows.Count > 0)
              {
                  foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                  // other properties set
                  blnResult = true;
              }
              return blnResult;
          }
      }

      // Consumer
      Foo foo = new Foo();
      foo.FooID = 1234;
      foo.LoadFoo();
      // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application.
    Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code ("it works"), so we cannot change the existing classes to use an ORM. I also don't know how often we add brand new modules instead of adding to or fixing current modules, so I'm not sure an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of are cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones still used DataSets) and less hassle remembering which procedure scripts have been run and which still need to be run - but that's it, and I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.
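
    For the "cleaner code" argument, here is a hedged sketch of what the same load looks like with an ORM-style typed query. Entity Framework is assumed purely for illustration, and the context and entity names are made up; whichever ORM the team picked would differ in the details:

      using System.Data.Entity;
      using System.Linq;

      public class Foo
      {
          public int FooID { get; set; }
          public string FooName { get; set; }
          // other properties...
      }

      public class AppContext : DbContext
      {
          public DbSet<Foo> Foos { get; set; }
      }

      // Consumer: the query is typed end-to-end; no sproc bookkeeping, no untyped DataSet indexing.
      using (var db = new AppContext())
      {
          Foo foo = db.Foos.FirstOrDefault(f => f.FooID == 1234);
          if (foo != null)
          {
              // do stuff with foo...
          }
      }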

    Read the article

  • HDMI sound gone, can't figure out how to turn it back on

    - by Oli
    I have had an Acer Revo box as a media centre for a while. I recently installed Ubuntu Server (10.10) on it and polished it up with nodm (one of the most simple ways to launch an X session) and installed boxee. It's been working fine for over a month. It's just running ALSA. I've had problems with PulseAudio/Boxee/HDMI before so I wanted to keep it simple. And that worked. It pushed both PCM and digital (AAC and various Dolby codecs) over HDMI perfectly. But I restarted it the other day after mucking around with some nfs configuration and now there isn't any sound. The hardware is an ION chipset. Nvidia 9400M graphics with Nvidia MCP79/7A audio. One thing I have noticed is there doesn't appear to be any sign of a IEC958 device. A traditional fix in the past for fresh installs has been to load alsamixer, find the IEC device and toggle its mute but I can't. I'm certain this used to represent the HDMI output. It just doesn't seem to exist any more unless I run sudo alsa-utils restart while boxee is running, when I see it in an error message: * Shutting down ALSA... [ OK ] * Setting up ALSA... * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1388: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'... ...done. When nodm (and thus boxee) aren't running, I don't see this error but alsamixer still doesn't show the IEC channel. aplay -l gives: card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0] Subdevices: 0/1 Subdevice #0: subdevice #0 Its section in lshw reads: *-multimedia description: Audio device product: MCP79 High Definition Audio vendor: nVidia Corporation physical id: 8 bus info: pci@0000:00:08.0 version: b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list configuration: driver=HDA Intel latency=0 maxlatency=5 mingnt=2 resources: irq:22 memory:fae78000-fae7bfff I was running on the stock PAE kernel but now it's running on 2.6.37.1. I upgraded to see if that fixed things; it didn't. I'm considering a reinstall but I hate doing that because a) there's a bit of custom configuration in getting X and Boxee to start on boot and b) I don't know what the problem is. If I reinstall this time, I'll end up doing that every time the sound breaks. I love Ubuntu but I don't want to install it once a month. Is there any way to forcibly reset all alsa settings and restart from scratch (without doing a reinstall)? Any other tips? If you need more information, just ask.
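
    For the "forcibly reset all ALSA settings" part, a hedged sketch of the usual sequence on Ubuntu (the saved-state path and the alsa init script can vary between releases, so treat this as a starting point rather than a guaranteed fix):

      # back up and clear the saved mixer state, then reload the ALSA modules
      sudo cp /var/lib/alsa/asound.state /var/lib/alsa/asound.state.bak
      sudo rm /var/lib/alsa/asound.state
      sudo alsa force-reload
      # re-initialise and re-save mixer levels
      sudo alsactl init
      sudo alsactl store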

    Read the article

  • My problem with "The installation or removal of a software package failed"

    - by tulipelle
    Recently, when I open Ubuntu Software Center, it asks me to repair a package. Then I found this message:

      installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 569135 files and directories currently installed.)
      Unpacking linux-image-3.5.0-42-generic (from .../linux-image-3.5.0-42-generic_3.5.0-42.65~precise1_amd64.deb) ...
      Done.
      dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65~precise1_amd64.deb (--unpack):
       failed in write on buffer copy for backend dpkg-deb during `./boot/vmlinuz-3.5.0-42-generic': No space left on device
      No apport report written because the error message indicates a disk full error
      dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
      Examining /etc/kernel/postrm.d .
      run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic
      run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic
      Errors were encountered while processing:
       /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65~precise1_amd64.deb
      Error in function:
      dpkg: dependency problems prevent configuration of linux-image-generic-lts-quantal:
       linux-image-generic-lts-quantal depends on linux-image-3.5.0-42-generic; however:
        Package linux-image-3.5.0-42-generic is not installed.
      dpkg: error processing linux-image-generic-lts-quantal (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of linux-generic-lts-quantal:
       linux-generic-lts-quantal depends on linux-image-generic-lts-quantal; however:
        Package linux-image-generic-lts-quantal is not configured yet.
      dpkg: error processing linux-generic-lts-quantal (--configure):
       dependency problems - leaving unconfigured
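
    The log itself points at the cause ("No space left on device" while writing /boot/vmlinuz-...), so the usual remedy is to free space in /boot and then let apt finish the interrupted installation. A hedged sketch of the steps; the kernel version to purge is an example only and must come from what dpkg -l shows on your own system (never remove the kernel reported by uname -r):

      # check how full /boot (or the root filesystem) is
      df -h /boot /

      # list installed kernels and remove an old one you are NOT currently running
      uname -r
      dpkg -l 'linux-image-*'
      sudo apt-get purge linux-image-3.5.0-36-generic   # example version only

      # then let apt finish the interrupted installation and tidy up
      sudo apt-get -f install
      sudo apt-get autoremove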

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high quality code. A month ago I wrapped up my evaluation of the previous version of NDepend.
    The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. It also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.
    Introducing CQLinq
    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

      WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

      warnif count > 0
      from t in Types
      where t.IsInterface == true && !t.NameLike("I")
      select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window, which replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block, which has a tendency to obscure your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly. On the plus side, CQLinq makes it possible to write rules that weren't possible before, and a JustMyCode domain is now available, making it easy to eliminate generated code from the analysis.
    Should you buy?
    I recommend NDepend overall. It has some rough points for me that I have detailed in my earlier evaluation (starting here), but it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0, or if you need one of the many rules that weren't possible before CQLinq.
    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.
    Resources: NDepend Release Notes

    Read the article

  • Loading levels from .txt or .XML for XNA

    - by Dave Voyles
    I'm attempting to add multiple levels to my pong game. I'd like to simply exchange a few elements with each level, nothing crazy - just the background texture, the color of the AI paddle (the one on the right side), and the music. It seems that the best way to go about this is to use a StreamReader to read and write the level files, whether plain .txt or XML. If there is a better or more efficient alternative, I'm all for it. Looking over the XNA Platformer starter kit provided by MS, it seems that they've done it in this manner as well. I'm perplexed by a few things, however, namely parts within the Level class which aren't commented:

      /// <summary>
      /// Iterates over every tile in the structure file and loads its
      /// appearance and behavior. This method also validates that the
      /// file is well-formed with a player start point, exit, etc.
      /// </summary>
      /// <param name="fileStream">
      /// A stream containing the tile data.
      /// </param>
      private void LoadTiles(Stream fileStream)
      {
          // Load the level and ensure all of the lines are the same length.
          int width;
          List<string> lines = new List<string>();
          using (StreamReader reader = new StreamReader(fileStream))
          {
              string line = reader.ReadLine();
              width = line.Length;
              while (line != null)
              {
                  lines.Add(line);
                  if (line.Length != width)
                      throw new Exception(String.Format("The length of line {0} is different from all preceeding lines.", lines.Count));
                  line = reader.ReadLine();
              }
          }

    What does width = line.Length mean exactly? I mean, I know how it reads the line, but what difference does it make if one line is longer than any of the others? Finally, their levels are simply text files that look like this:

      ....................
      ....................
      ....................
      ....................
      ....................
      ....................
      ....................
      .........GGG........
      .........###........
      ....................
      ....GGG.......GGG...
      ....###.......###...
      ....................
      .1................X.
      ####################

    It can't be that easy..... Can it?
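
    On the width question, a hedged reading of that code: the level is treated as a rectangular grid of tiles, one character per tile, so every row must have the same number of characters. width is captured from the first line and each subsequent line is checked against it, because a ragged line would make "the tile at column x, row y" ambiguous. A minimal sketch of how such a grid is then consumed (the names here are illustrative, not the starter kit's exact API):

      // Each character maps to one tile; (x, y) indexing only works if all rows have equal length.
      private Tile[,] LoadGrid(List<string> lines, int width)
      {
          Tile[,] tiles = new Tile[width, lines.Count];
          for (int y = 0; y < lines.Count; ++y)
          {
              for (int x = 0; x < width; ++x)
              {
                  char tileType = lines[y][x];              // would throw if a line were shorter than width
                  tiles[x, y] = LoadTile(tileType, x, y);   // LoadTile is a hypothetical helper
              }
          }
          return tiles;
      }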

    Read the article

  • Updating an Entity through a Service

    - by GeorgeK
    I'm separating my software into three main layers (maybe tiers would be a better term):
    - Presentation ('Views')
    - Business logic ('Services' and 'Repositories')
    - Data access ('Entities' (e.g. ActiveRecords))
    What do I have now? In Presentation, I use read-only access to Entities, returned from Repositories or Services, to display data:

      $banks = $banksRegistryService->getBanksRepository()->getBanksByCity( $city );
      $banksViewModel = new PaginatedList( $banks ); // some way to display banks; example, not real code

    I find this approach quite efficient in terms of performance and code maintainability, and still safe as long as all write operations (create, update, delete) are performed through a Service:

      namespace Service\BankRegistry;

      use Service\AbstractDatabaseService;
      use Service\IBankRegistryService;
      use Model\BankRegistry\Bank;

      class Service extends AbstractDatabaseService implements IBankRegistryService
      {
          /**
           * Registers a new Bank
           *
           * @param string $name Bank's name
           * @param string $bik Bank's Identification Code
           * @param string $correspondent_account Bank's correspondent account
           *
           * @return Bank
           */
          public function registerBank( $name, $bik, $correspondent_account )
          {
              $bank = new Bank();
              $bank -> setName( $name )
                    -> setBik( $bik )
                    -> setCorrespondentAccount( $correspondent_account );

              if( null === $this->getBanksRepository()->getDefaultBank() )
                  $this->setDefaultBank( $bank );

              $this->getEntityManager()->persist( $bank );

              return $bank;
          }

          /**
           * Makes the $bank system's default bank
           *
           * @param Bank $bank
           * @return IBankRegistryService
           */
          public function setDefaultBank( Bank $bank )
          {
              $default_bank = $this->getBanksRepository()->getDefaultBank();
              if( null !== $default_bank )
                  $default_bank->setDefault( false );

              $bank->setDefault( true );

              return $this;
          }
      }

    Where am I stuck? I'm struggling with how to update certain fields in the Bank entity.
    Bad solution #1: making a series of setters in the Service for each setter in Bank. This seems quite redundant, increases the Service interface's complexity and proportionally decreases its simplicity - something to avoid if you care about code maintainability. I try to follow the KISS and DRY principles.
    Bad solution #2: modifying Bank directly through its native setters. This is really bad: if you ever need to move a modification into the Service, it will be painful. Business logic should remain in the business logic layer. Plus, there are plans to log all of the actions and maybe even involve user permissions (perhaps through decorators) in the future, so all modifications should be made only through the Service.
    Possible good solution: creating an updateBank( Bank $bank, $array_of_fields_to_update ) method. This makes the interface as simple as possible, but there is a problem: one should not try to manually set the isDefault flag on a Bank; that operation should be performed through the setDefaultBank method. It gets even worse when you have relations that you don't want to be directly modified. Of course, you can just limit the fields that can be modified by this method, but how do you tell the method's user what they can and cannot modify? Exceptions?
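
    As a hedged sketch of the "possible good solution", one way to keep the interface small while still protecting fields like the default flag is a whitelist inside updateBank(); the whitelist contents and the exception type are assumptions for illustration, not part of the original design:

      /**
       * Updates simple fields of a Bank; protected fields must go through dedicated methods.
       *
       * @param Bank  $bank
       * @param array $fields field name => new value
       * @return Bank
       */
      public function updateBank( Bank $bank, array $fields )
      {
          // Only these fields may be mass-updated; isDefault deliberately stays out
          // and must be changed through setDefaultBank().
          static $updatable = array( 'name', 'bik', 'correspondentAccount' );

          foreach( $fields as $field => $value )
          {
              if( !in_array( $field, $updatable, true ) )
                  throw new \InvalidArgumentException( "Field '$field' cannot be updated through updateBank()" );

              $setter = 'set' . ucfirst( $field );
              $bank->$setter( $value );
          }

          return $bank;
      }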

    Read the article

  • Mobile Apps: An Ongoing Revolution

    - by Steve Walker
    a guest post from Suhas Uliyar, VP Mobile Strategy, Product Management, Oracle The rise of smartphone apps have proved transformational for businesses, increasing the productivity of employees while simultaneously creating some seriously cool end user experiences. But this is a revolution that is only just beginning. Over the next few years, apps will change everything about the way enterprises work as well as overhauling the experiences of customers. The spark for this revolution is simplicity. Simplicity has already proved important for the front-end of apps, which are now often as compelling and intuitive as consumer apps. Businesses will encourage this trend, both to further increase employee productivity and to attract ‘digital natives’ (as employees and customers). With the variety of front-end development tools available already, this should be a simple mission for developers to accomplish – but front-end simplicity alone is not enough for the enterprise mobile revolution. Without the right content even the most user-friendly app is useless. Yet when it comes to integrating apps with ‘back-end’ systems to enable this content, developers often face a complex, costly and time-consuming task. Then there is security: how can developers strike a balance between complying with enterprise security policies and keeping the user experience simple? Complexity has acted as a brake on innovation, with integration and security compliance swallowing enterprise resources. This is why the simplification of integration, security and scalability is so important: it frees time and money for revolutionary innovation. The key is to put in place a complete and unified SOA integration platform that runs across the entire enterprise and enables organizations to easily integrate and connect applications across IT environments. The platform must also be capable of abstracting apps from the underlying OS and enabling a ‘write-once, run- anywhere’ capability for mobile devices - essential for BYOD environments and integrating third-party apps. Mobile Back-end-as-a-Service can also be very important in streamlining back-end integration. Mobile services offered through the cloud can simplify mobile application development with a standard approach to dealing with complex server-side programming and integration issues. This allows the business to innovate at its own pace while providing developers with a choice of tools to speed development and integration. Finally, there is security, which must be done in a way that encourages users to make the most of their mobile devices and applications. As mobile users, we want convenience and that is why we generally approve of businesses that adopt BYOD policies. Enterprises can safely encourage BYOD as they can separate, protect, and wipe corporate applications by installing a secure ‘container’ around corporate applications on any mobile device. BYOD management also means users’ personal applications and data can be kept separate from the enterprise information – giving them the confidence they need to embrace the use of their devices for corporate apps. Enterprises that place mobility at the heart of what they do will fundamentally transform their businesses and leap ahead of the competition. As businesses take to mobile platforms that simplify integration, security and scalability we will see a blossoming of innovation that will drive new levels of user convenience and create new ways of working that we are only beginning to imagine.

    Read the article

  • Enabling EUS support in OUD 11gR2 using command line interface

    - by Sylvain Duloutre
    Enterprise User Security (EUS) allows Oracle Database to use users & roles stored in LDAP for authentication and authorization. Since the 11gR2 release, OUD natively supports EUS. EUS can be easily configured during OUD setup, and ODSM (the graphical admin console) can also be used to enable EUS for a new suffix. However, enabling EUS for a new suffix using the command line interface is currently not documented, so here is the procedure.
    Let's assume that EUS support was enabled during initial setup, and let o=example be the new suffix I want to use to store enterprise users. The following sequence of commands must be applied for each new suffix:

      // Create a local database holding EUS context info
      dsconfig create-workflow-element --set base-dn:cn=OracleContext,o=example --set enabled:true --type db-local-backend --element-name exampleContext -n

      // Add a workflow element in the call path to generate on the fly attributes required by EUS
      dsconfig create-workflow-element --set enabled:true --type eus-context --element-name eusContext --set next-workflow-element:exampleContext -n

      // Add the context to a workflow for routing
      dsconfig create-workflow --set base-dn:cn=OracleContext,o=example --set enabled:true --set workflow-element:eusContext --workflow-name exampleContext_workflow -n

      // Add the new workflow to the appropriate network group
      dsconfig set-network-group-prop --group-name network-group --add workflow:exampleContext_workflow -n

      // Create the local database for o=example
      dsconfig create-workflow-element --set base-dn:o=example --set enabled:true --type db-local-backend --element-name example -n

      // Create a workflow element in the call path to the user data to generate on the fly attributes expected by EUS
      dsconfig create-workflow-element --set enabled:true --set eus-realm:o=example --set next-workflow-element:example --type eus --element-name eusWfe

      // Add the db to a workflow for routing
      dsconfig create-workflow --set base-dn:o=example --set enabled:true --set workflow-element:eusWfe --workflow-name example_workflow -n

      // Add the new workflow to the appropriate network group
      dsconfig set-network-group-prop --group-name network-group --add workflow:example_workflow -n

      // Add the appropriate acis for EUS
      dsconfig set-access-control-handler-prop \
          --add global-aci:'(target="ldap:///o=example")(targetattr="authpassword")(version 3.0; acl "EUS reads authpassword"; allow (read,search,compare) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

      dsconfig set-access-control-handler-prop \
          --add global-aci:'(target="ldap:///o=example")(targetattr="orclaccountstatusevent")(version 3.0; acl "EUS writes orclaccountstatusenabled"; allow (write) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

    Last but not least, you must adapt the content of the ${OUD}/config/EUS/eusData.ldif file with your suffix value, then import it into OUD.

    Read the article

  • What if I can't make my unit test fail in "Red, Green, Refactor" of TDD?

    - by Joshua Harris
    So let's say that I have a test:

      @Test
      public void MoveY_MoveZero_DoesNotMove() {
          Point p = new Point(50.0, 50.0);
          p.MoveY(0.0);
          Assert.assertAreEqual(50.0, p.Y);
      }

    This test then causes me to create the class Point:

      public class Point {
          double X;
          double Y;

          public void MoveY(double yDisplace) {
              throw new NotYetImplementedException();
          }
      }

    Ok. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test that it changes value. So I write a test that calls p.MoveY(10.0) and checks that p.Y is equal to 60.0. It fails, so then I change the function to look like so:

      public void MoveY(double yDisplace) {
          Y += yDisplace;
      }

    Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that if I wrote the test correctly, it doesn't fail at first. That means I didn't follow the principle of "Red, Green, Refactor."
    Of course, this is a first-world problem of TDD, but getting a failure at first is helpful in that it shows that your test can fail. Otherwise a seemingly innocent test that passes for the wrong reasons could fail later because it was written incorrectly. That might not be a problem if it happened 5 minutes later, but what if it happens to the poor sap who inherited your code two years later? What he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it could really work, and the failure could just be a bug in the test. I don't think that would happen in this particular case because the code sample is so simple, but in a large complicated system that might not be true. It seems crazy to say that I want my tests to fail, but that is an important step in TDD, for good reasons.
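
    For concreteness, here is a hedged sketch of the negative-value test being described - the one that passes on its very first run against the implementation above. (JUnit 4's assertEquals is used here instead of the question's Assert.assertAreEqual, and the two-argument constructor is assumed to exist as in the first test.)

      import static org.junit.Assert.assertEquals;
      import org.junit.Test;

      public class PointTest {
          @Test
          public void MoveY_MoveNegative_MovesDown() {
              Point p = new Point(50.0, 50.0);
              p.MoveY(-10.0);
              // Green immediately: Y += yDisplace already handles negative displacements.
              assertEquals(40.0, p.Y, 1e-9);
          }
      }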

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm, and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue is once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github and choosing public domain licensing. I would like to to be super simple for users to make use of and just FTP it up and go. That being said, do I need to make sure I remove things like the JQuery file, and other GPL / MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code and all of it open source, it would just be nice if users could download everything at once while of course not trying to represent that I am the license holder of the dependencies. Inside my files are also some snippets, do those have to be externalized with installation instructions or can it be posted as is? Here is an example, my nav.php file is 115 lines long and I have these at the top: <script type="text/javascript" src="./js/ddaccordion.js"> /*********************************************** * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com) * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts * This notice must stay intact for legal use ***********************************************/ </script> <link href="css/admin.css" rel="stylesheet"> <script type="text/javascript"> ddaccordion.init({ headerclass: "submenuheader", //Shared CSS class name of headers group contentclass: "submenu", //Shared CSS class name of contents group revealtype: "click", //Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover" mouseoverdelay: 200, //if revealtype="mouseover", set delay in milliseconds before header expands onMouseover collapseprev: false, //Collapse previous content (so only one open at any time)? true/false defaultexpanded: [], //index of content(s) open by default [index1, index2, etc] [] denotes no content onemustopen: false, //Specify whether at least one header should be open always (so never all headers closed) animatedefault: false, //Should contents open by default be animated into view? persiststate: true, //persist state of opened contents within browser session? toggleclass: ["", ""], //Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"] togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], //Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs) animatespeed: "fast", //speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow" oninit:function(headers, expandedindices){ //custom code to run when headers have initalized //do nothing }, onopenclose:function(header, index, state, isuseractivated){ //custom code to run whenever a header is opened or closed //do nothing } }) </script>

    Read the article

  • Career Development: What should I learn next after Python? and Why? [closed]

    - by Josh
    Hi all, I'm currently learning Python. I want to know which of these programming languages I should learn next: PHP, ActionScript 3, or Objective-C (iPhone applications). I work in the multimedia industry and decided to learn Python seriously as a first programming language because I want to learn the basics of programming and, mainly, to write scripts at work that automate tasks (e.g. editing multiple XML files quickly). At work we have a senior developer who knows ActionScript and PHP very well (although he knows PHP better). We have also been developing iPhone applications for 2 weeks; our senior developer could learn it, but we currently have lots of PHP and ActionScript 3 work and haven't had the time or reason to pick up iOS development. Here are the reasons I want to learn each language, but I cannot decide which to learn next:

    PHP: I want to learn PHP because it will help with web development. PHP is very much in demand with employers, and our senior developer writes everything in it: web sites, CMSes, etc. (including XML checks and scripts). I will learn a lot from him (once I learn the basics). However, I don't want to learn web development because you have to deal with lots of cross-browser problems.

    ActionScript 3: At work we are looking to take on another developer to help with online activities and very small games (using ActionScript 3.0 and Flash CS5), e.g. first aid activities. I would like to do things that have an element of design, as I'm better at Photoshop than at developing. I want to be creative; I like to interact with users in a fun way.

    Objective-C (iPhone applications): We are an all-Mac office, and we may get more iPhone and iPad application work (jobs) that needs to be done. Work has found it nearly impossible to find good iPhone developers. I like Apple products (Macs and iPhones), and I would like to make my own games and applications in my spare time (if I knew how).

    Should I learn ActionScript first because it would be easier to learn than Objective-C? Should I learn PHP because it is very widely used? Should I learn Objective-C because it is so in demand with employers now?
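    As an aside, for the XML-editing automation mentioned above, a minimal Python sketch using only the standard library might look like the following; the folder path, attribute name, and new value are hypothetical placeholders, not details from the post.

    # A minimal sketch, assuming each file is well-formed XML and the attribute
    # to change lives on the root element; "./configs", "version" and "2.0"
    # are hypothetical placeholders.
    from pathlib import Path
    import xml.etree.ElementTree as ET

    def update_attribute(folder: str, attribute: str, new_value: str) -> None:
        for xml_path in Path(folder).glob("*.xml"):
            tree = ET.parse(xml_path)                 # load the file into an element tree
            tree.getroot().set(attribute, new_value)  # edit the attribute in memory
            tree.write(xml_path, encoding="utf-8", xml_declaration=True)  # save in place

    if __name__ == "__main__":
        update_attribute("./configs", "version", "2.0")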

    Read the article

  • Best way to remote restart Ubuntu from Windows machine

    - by robsoft
    Background: I'm looking to put a series of Ubuntu machines into retail locations, where they're being used as dumb kiosks to show a series of slides on large LCD panel TV screens. Once installed, they won't have a keyboard or mouse connected, but they will have a fixed IP on the local network. Everything is configured to auto-start: no automatic updates, no power saving, etc. I think we're pretty much good to go apart from one thing: I need the retail staff to be able to restart the boxes if a problem arises.

    We have VNC running (now that we've turned off desktop enhancements!) so that we can remotely get into the machines if we need to, but that's not something we would allow the retail staff to do. The machines are going to be physically 'out of the way' (probably in the ceiling space) so the power button is not easily accessible! I'd like to have some means of allowing the retail staff to restart the Ubuntu machine from the desktop of one of their Windows terminals. I don't really want to give them some kind of raw terminal access (the command line will frighten them!) and I don't want them to use VNC (as stated above). Ideally there would be an icon on the Windows desktop: they double-click it, reply to a simple 'are you sure?' prompt, and the Ubuntu box is told to restart. The Windows side of that won't be a problem; we can write something using Delphi, Python & Qt4, whatever. It's the Ubuntu side of it I'm stuck with.

    Out of sight/view, could I have a Windows program open a terminal across the network and tell Ubuntu to restart? Is this what SSH could be used for (I have never set that kind of thing up)? The Windows programming side isn't really an issue; it's just that I'm a total Ubuntu noob and don't know where to start from the platform point of view. The other thing we considered is having the machine automatically restart itself at a set time each day (obviously out of store hours!). To me, that seems a bit unnecessary (though forcing a restart once a week/month might be worthwhile). Any thoughts or suggestions? Being able to restart the box on demand across the network is my prime requirement.
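    Since the question asks whether SSH is the right tool here: yes, running a reboot command over SSH is the usual approach, and the sketch below shows one way to script it from the Windows side in Python using the paramiko library. The IP address, account name, password, and the assumption that /etc/sudoers lets that account run /sbin/reboot without a password are all illustrative, not details from the question.

    # A minimal sketch, assuming paramiko is installed (pip install paramiko)
    # and the kiosk account is allowed to run "sudo /sbin/reboot" without a
    # password (a NOPASSWD sudoers rule); host, user and password are placeholders.
    import paramiko

    KIOSK_HOST = "192.168.1.50"   # hypothetical fixed IP of the kiosk
    KIOSK_USER = "kiosk"          # hypothetical restart-only account
    KIOSK_PASS = "change-me"      # an SSH key would be preferable in practice

    def restart_kiosk() -> None:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept the host key
        client.connect(KIOSK_HOST, username=KIOSK_USER, password=KIOSK_PASS)
        try:
            client.exec_command("sudo /sbin/reboot")  # connection may drop as the box goes down
        finally:
            client.close()

    if __name__ == "__main__":
        if input("Restart the kiosk? (yes/no) ").strip().lower() == "yes":
            restart_kiosk()

    Wrapped in a small GUI or pinned to the Windows desktop as a shortcut, the input() check above becomes the 'are you sure?' prompt described in the question.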

    Read the article

  • Indefinite loops where the first time is different

    - by George T
    This isn't a serious problem or anything someone has asked me to do, just a seemingly simple thing that I came up with as a mental exercise, but which has stumped me and which I feel I should already know the answer to. There may be a duplicate but I didn't manage to find one.

    Suppose that someone asked you to write a piece of code that asks the user to enter a number and, every time the number they entered is not zero, says "Error" and asks again. When they enter zero it stops. In other words, the code keeps asking for a number and repeats until zero is entered; in each iteration except the first one it also prints "Error". The simplest way I can think of to do that would be something like the following pseudocode:

    int number = 0;
    do {
        if(number != 0) {
            print("Error");
        }
        print("Enter number");
        number = getInput();
    } while(number != 0);

    While that does what it's supposed to, I personally don't like that there's repeating code (you test number != 0 twice), something that should generally be avoided. One way to avoid this would be something like this:

    int number = 0;
    while(true) {
        print("Enter number");
        number = getInput();
        if(number == 0) {
            break;
        } else {
            print("Error");
        }
    }

    But what I don't like in this one is "while(true)", another thing to avoid. The only other way I can think of includes one more thing to avoid: labels and gotos:

    int number = 0;
    goto question;

    error:
        print("Error");
    question:
        print("Enter number");
        number = getInput();
        if(number != 0) {
            goto error;
        }

    Another solution would be to have an extra variable to test whether you should say "Error" or not, but this is wasted memory. Is there a way to do this without doing something that's generally thought of as a bad practice (repeating code, a theoretically endless loop, or the use of goto)? I understand that something like this would never be complex enough for the first way to be a problem (you'd generally call a function to validate input), but I'm curious to know if there's a way I haven't thought of.
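    For what it's worth, some languages let you fold the read into the loop condition so the zero test appears exactly once, with no while(true) and no goto. Here is a minimal sketch in Python 3.8+ using the assignment expression (walrus) operator; the prompt text and the get_input helper are illustrative, not taken from the question:

    # A minimal sketch: the read happens in the loop condition, so a single
    # "number != 0" test both ends the loop and decides whether to print "Error".
    def get_input() -> int:
        return int(input("Enter number: "))   # illustrative; no handling of non-numeric input

    while (number := get_input()) != 0:       # assignment expression, Python 3.8+
        print("Error")

    C-family languages get the same effect by putting the assignment inside the while condition, e.g. while ((number = getInput()) != 0) { print("Error"); }.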

    Read the article

  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the 3rd exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: implement a compute shader to calculate the length of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers, and I had no problems with them. From what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND flag, and the buffers have both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags).

    The problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. The data is in the buffer, since if I change the UAV to a ShaderResourceView and the buffer to a StructuredBuffer on the HLSL side it works. I don't know what I am missing: Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I am currently doing)? Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow? Maybe I am missing some step?

    This is the declaration of the buffers in the compute shader:

    struct Data
    {
        float3 v;
    };

    struct Result
    {
        float l;
    };

    ConsumeStructuredBuffer<Data> gInput;
    AppendStructuredBuffer<Result> gOutput;

    And here is the creation of the buffer and UAV for the input data:

    D3D11_BUFFER_DESC inputDesc;
    inputDesc.Usage = D3D11_USAGE_DEFAULT;
    inputDesc.ByteWidth = sizeof(Data) * mNumElements;
    inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    inputDesc.CPUAccessFlags = 0;
    inputDesc.StructureByteStride = sizeof(Data);
    inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;

    D3D11_SUBRESOURCE_DATA vinitData;
    vinitData.pSysMem = &data[0];
    HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer));

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
    uavDesc.Format = DXGI_FORMAT_UNKNOWN;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.FirstElement = 0;
    uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
    uavDesc.Buffer.NumElements = mNumElements;
    md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV);

    The initial data is an array of Data structs, each containing an XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework:

    ID3DX11EffectUnorderedAccessViewVariable* Input =
        mFX->GetVariableByName("gInput")->AsUnorderedAccessView();
    Input->SetUnorderedAccessView(uav); // uav is mInputUAV

    Any ideas? Thank you.

    Read the article
