Search Results

Search found 6852 results on 275 pages for 'ascension systems'.

Page 180/275

  • Template syntax for users - is there a right way to do it?

    - by RickM
    Ok, I'm in the middle of building a SaaS system, and as part of that the hosted clients need to be able to edit certain layout templates - basically just HTML, CSS and JavaScript files. I'm obviously going to want to use a template syntax here, as it would be dumb to let people execute PHP code, so in this instance a template syntax does need to be used. I know that in the grand scale of things this is a very minor thing, but what template syntax do you use, and why? Is there one that's considered better than the others? I've seen all sorts being used with no real consistency, for example:

    Smarty style:

        {$someVar}
        {foreach from="foo" item="bar"}
            {$bar.food}
        {/foreach}

    ASP style:

        {% someVar %}
        {% foreach foo as bar %}
            {% bar.food %}
        {% endforeach %}

    HTML style:

        <someVar>
        <foreach from="foo" item="bar">
            <bar:food>
        </foreach>

    PyroCMS/FuelPHP "LEX" style:

        {{ someVar }}
        {{ foreach from="foo" item="bar" }}
            {{ bar:food }}
        {{ endforeach }}

    Obviously these aren't 100% accurate (for example, LEX is used alongside PHP for loops), and are only there to give you an example of what I mean. What, in your opinion, would be the best one (if any) to go with? I ask this bearing in mind that the people using this are likely to be novice users. I did look around at a bunch of hosted CMS and e-commerce systems, as these seem to make use of user-editable templates, and most seem to use some form of their own syntax. I should note that whatever style I end up going with, it will be handled by a custom template handler, due to the complexity of the system and how template files are stored. Plus I'd not want to touch the likes of Smarty with a barge pole!
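
    For a sense of scale, a "substitute-only" custom template handler of the kind described above can be very small. The sketch below is written in C++ purely as a neutral illustration (it is not tied to any of the engines, nor to the PHP code base in the question): it replaces {{ name }} placeholders from a map of variables and executes nothing else.

        #include <iostream>
        #include <map>
        #include <regex>
        #include <string>

        // Replace {{ name }} placeholders with values from `vars`.
        // Unknown names become empty strings; the template is never executed as code.
        std::string render(const std::string& tpl,
                           const std::map<std::string, std::string>& vars) {
            static const std::regex placeholder(R"(\{\{\s*(\w+)\s*\}\})");
            std::string out;
            auto last = tpl.cbegin();
            for (std::sregex_iterator it(tpl.cbegin(), tpl.cend(), placeholder), end;
                 it != end; ++it) {
                out.append(last, (*it)[0].first);        // copy literal text before the match
                auto found = vars.find((*it)[1].str());  // look up the placeholder name
                if (found != vars.end()) out += found->second;
                last = (*it)[0].second;
            }
            out.append(last, tpl.cend());                // copy the trailing literal text
            return out;
        }

        int main() {
            std::map<std::string, std::string> vars{{"someVar", "hello"}};
            std::cout << render("<p>{{ someVar }}</p>\n", vars);
        }

    Loops, conditionals and filters are where the real complexity (and the choice of syntax) starts to matter.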

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have a different processor type. For example, you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target system must run Solaris 11, and you need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.

    Read the article

  • Ubuntu install can't find hard drives

    - by Casey Hungler
    I recently got a Dell Inspiron Special Edition 7720 computer. I am trying to install Ubuntu alongside Windows. When I use the WUBI installer, the installation of Ubuntu works as long as I do not boot into Windows; if I boot into Windows, then when I go back into Ubuntu I am given a variety of error messages claiming a corrupt or missing kernel, root directory, etc. I have been working on this problem for about a week and have reinstalled Ubuntu MANY times. So far, I have eliminated all of the following problems:

      - A corrupt WUBI installation (downloaded multiple times, used on other systems).
      - Bad install media - I have tried using a CD and a flash drive, both of which work on other computers.
      - A program within Ubuntu creating the problem.
      - An unsupported operating system - others have successfully installed Ubuntu on a computer with my operating system (Windows 7 SP1).

    This is a much-shortened version of the original question, which has been up for about 5 days and included a more detailed description of the problem, but left everyone clueless as to its source. When I spoke with the Dell service technician who came over today to replace my keyboard, he suggested that the driver for my HDD was so new that it was not compatible with the current version of Ubuntu. His reasoning is as follows:

      1) During an install from a flash drive or CD, where I am supposed to get the option to wipe my system or create a dual boot, I get a window that asks me to select a hard drive partition, but none are listed.
      2) This model of computer was made public in June of this year, while Ubuntu was released in April.

    Adopting this theory, it would seem to me that the WUBI install fails after booting into Windows because Ubuntu can no longer find the files that it needs to load. Does this theory seem at all plausible to anyone? I just want to install Ubuntu and have it stay on my computer. I don't care how I put it there, I just need it to work, so I would TRULY appreciate any advice or suggestions anyone could give. Thanks so much for your time and support!!!

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to perform the update of the particle system I've made with DirectX 10. So far, my update method for the particle system is one line of code inside a for loop that makes each particle fall down the y axis to simulate a waterfall:

        m_particleList[i].positionY = m_particleList[i].positionY -
            (m_particleList[i].velocity * frameTime * 0.001f);

    In my .cu file I've created a struct which I copied from my particle class:

        struct ParticleType
        {
            float positionX, positionY, positionZ;
            float red, green, blue;
            float velocity;
            bool active;
        };

    I also have an UpdateParticle kernel in the .cu file, which takes the three main values my particles need to update themselves based on the original line of code:

        __global__ void UpdateParticle(float* position, float* velocity, float frameTime)
        {
        }

    This is my first CUDA program and I'm at a loss as to what to do next. I've tried simply putting the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it's because I'm not calling something that I need to in the class where the particle-fall code used to be. Could someone please tell me what I'm missing to get it working as it should? If I'm doing this completely wrong in general, then please inform me as well.
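
    What is typically missing at this point is a kernel body that applies the same fall rule per particle, plus host-side code that copies the particle data to the GPU, launches the kernel, and copies the results back. The sketch below is only a minimal, self-contained illustration of that pattern (the particle count, frame time and buffer layout are invented for the example and are not taken from the original post):

        #include <cuda_runtime.h>
        #include <cstdio>

        // One thread per particle, applying the same fall rule as the CPU loop.
        __global__ void UpdateParticle(float* positionY, const float* velocity,
                                       float frameTime, int count)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < count)
                positionY[i] -= velocity[i] * frameTime * 0.001f;
        }

        int main()
        {
            const int count = 1024;            // illustrative particle count
            const float frameTime = 16.0f;     // illustrative frame time (ms)

            float hostY[count], hostV[count];
            for (int i = 0; i < count; ++i) { hostY[i] = 100.0f; hostV[i] = 1.0f + (i % 5); }

            float *devY = 0, *devV = 0;
            cudaMalloc((void**)&devY, count * sizeof(float));
            cudaMalloc((void**)&devV, count * sizeof(float));
            cudaMemcpy(devY, hostY, count * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(devV, hostV, count * sizeof(float), cudaMemcpyHostToDevice);

            // Launch enough 256-thread blocks to cover every particle.
            int threads = 256;
            int blocks  = (count + threads - 1) / threads;
            UpdateParticle<<<blocks, threads>>>(devY, devV, frameTime, count);
            cudaDeviceSynchronize();

            // Copy the updated positions back so the DirectX-side list can be refreshed.
            cudaMemcpy(hostY, devY, count * sizeof(float), cudaMemcpyDeviceToHost);
            printf("particle 0 y = %f\n", hostY[0]);

            cudaFree(devY);
            cudaFree(devV);
            return 0;
        }

    In a real integration, the copy-back step (or CUDA/Direct3D interop) is what actually gets the new positions to the renderer; skipping it is a common reason the particles appear not to fall.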

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a server/client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was take their IP address, the client version, and a few other attributes of the client and send them as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using and calculated the same number to see if they matched. This works OK until you get clients that connect from within a router environment, where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about:

        Client IP: 192.168.1.10, Version: 2.4.5.2
        hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0)

        1. Client connects to server.
        2. Client sends: auth hash, IP, version.
        3. Server calculates the same value from that info, and accepts or denies the hash.

    Before I go and come up with another algorithm to prove a client can provide data to a server (or reuse this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will contribute data to the server periodically. New clients will be added simply by connecting the client to the server and "registering" with it. So a client connects to the server for the first time and registers its info (MAC address or some other kind of unique computer identifier); when it connects again, the server recognizes it as a known client and associates it with its data in the database.
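
    To make the scheme above concrete, here is a small sketch (C++, purely illustrative, and based on one possible reading of the formula in the example: multiply the digits of the version, then multiply by the digit sum of each IP octet):

        #include <cctype>
        #include <iostream>
        #include <sstream>
        #include <string>

        // One reading of the hash described above. Both inputs travel over the wire,
        // so the server - or anyone who can observe the traffic - can recompute it.
        long computeAuthHash(const std::string& ip, const std::string& version)
        {
            long hash = 1;
            for (char c : version)
                if (std::isdigit(static_cast<unsigned char>(c)))
                    hash *= (c - '0');

            std::istringstream octets(ip);
            std::string octet;
            while (std::getline(octets, octet, '.')) {
                long digitSum = 0;
                for (char c : octet) digitSum += (c - '0');
                hash *= (digitSum == 0 ? 1 : digitSum);   // avoid zeroing out on a "0" octet
            }
            return hash;
        }

        int main()
        {
            std::cout << computeAuthHash("192.168.1.10", "2.4.5.2") << "\n";
        }

    Writing it out this way also illustrates the concern in the post: because the hash is derived only from values sent alongside it, anyone can reproduce it. Keyed constructions (for example an HMAC over a server-issued nonce) are the standard way to get a value that both ends can compute but an observer cannot.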

    Read the article

  • How do you keep down your urge to learn many things [closed]

    - by devsundar
    One of the difficulties I have is lowering my urge to learn new things (languages, tools, frameworks etc.). I know it's good to stay on the bleeding edge, but at the same time I want to learn things properly. I really see that I need to strike a balance between staying on the bleeding edge and knowing things properly. For example: before choosing Arch (desktop), Ubuntu (server) and Knoppix (portable) - depending on the situation - as my favourite distributions, I had tried virtually all popular Linux distributions. Name any popular Linux (Red Hat, Ubuntu, Arch, SUSE, Knoppix, Slax, Slackware) and I have tried it for some time. In fact I have spent a few years experimenting with operating systems. Before choosing Python and JavaScript (Node.js), I tried all the languages I came across: Scala, Haskell, Erlang, Ruby, Python, Perl, Scheme. The same applies to databases - all the popular RDBMSs (Oracle, MySQL, Postgres, SQLite [favourite] etc.) and NoSQL stores (Mongo, Couch, Neo4j etc.).

    Advantages I see:

      - We get an overall picture of the technologies/tools/languages.
      - It's useful for selecting the right tool for the job.
      - We develop a taste and choose the one we like.

    Disadvantages:

      - I feel that I spend so much time on this and see a need to strike a balance.

    In summary: if I see a blog post on Hacker News about CoffeeScript, I will try it out irrespective of what I am currently learning (say Haskell). I switch back to learning Haskell, then I see Dart and I check it out. And this continues... Effectively I take more time to learn Haskell, but learn about other new stuff on the way. The question I have is: how do you strike a balance between staying on the bleeding edge and learning properly?

    Read the article

  • Can't connect nonlocally after 12.10 upgrade

    - by user101815
    I've just upgraded one of my systems from 12.04 to 12.10. Now I can't connect on that system beyond my local network. Connections within the local network seem to work fine, and I can make nonlocal connections from other machines (like the one I'm asking this question from). I suspect that some routing information has been messed up, but I don't know where to look for it. It's not a nameserver problem - pinging outside sites by their IP addresses doesn't work either.

    I have another laptop next to this one, also running Kubuntu 12.10. On the one that can't connect, arp produces no output. On the other one, it produces:

        192.168.0.1   ether   00:23:69:fa:ce:ae   C   wlan0

    On the working machine, the output of netstat starts with some tcp entries. On the nonworking one, those entries are absent. I asked this question on the Ubuntu forum but haven't gotten any answers there. One further complication: since the troublesome machine has no outside connection, it's extremely difficult to download anything to it. For what it's worth, "ping 8.8.8.8" produces "connect: Network is unreachable".

    Update: after a lot of fiddling, I have my external world back. I don't know what the key action was, but the first indication of progress was that "ping 8.8.8.8" worked. At that point I still didn't have a working nameserver, so external URLs didn't work. But I did this (based on an online post, of course):

        sudo dpkg-reconfigure resolvconf

    and answered Yes to all prompts. That did the trick!! Apparently my problem was unique, or close to it, since I couldn't find any online references to it: local net working, remote net not working, including explicit IP addresses. So I suppose that if no one else has this problem, no one cares about the solution!!

    Read the article

  • Do you think we will ever settle on a "standard" platform? [closed]

    - by GazTheDestroyer
    The recent explosion of phone platforms has depressed me (slightly), and made me wonder if we will ever reach any kind of standard for presentation. I don't mean language or IDE. Different languages have different strengths and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas. What I'm really talking about is a common presentation mechanism. Before smartphones and tablets came along, the web seemed to be finally becoming a reasonable platform for presenting an application that was globally accessible, not just geographically, but by platform too. Sure, there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. "Write once, run anywhere" seemed to finally be becoming a reality. However, in the last few years we've seen an explosion of mobile operating systems and the ubiquitous "app". A good site is no longer enough; you need a native "app", and of course we have a sudden massive disparity in the OS, language, and APIs needed to write them as each battles for supremacy. It's kind of weird how the cycle of popularity goes: mainframes with terminals - thin client; PC - thick client; web browser - thin client; phone app - thick(ish) client. I just wonder if you think there will ever be a global standard for clients, or whether the "shiny and different" cycle will always continue along with the battle of the tech du jour.

    Read the article

  • Setting up a network between a host and guest virtual machine

    - by anonymous
    (I'm running Ubuntu Server 12.04 on VirtualBox.) I'm trying to transfer a file with scp from my laptop to one of the directories of a virtual machine. I tried shared folders, but that failed. I'm a bit of a networking newbie. I've looked at maybe 20-30 pages. Here's one:

        http://www.howtoforge.com/moving-files-between-linux-systems-with-scp

    I followed those steps exactly. My problem is that when I try using scp, it just hangs. I'm also not sure which network interface to configure (eth0, eth1?) in the guest OS. Another (significant?) detail is that the inet address of eth0 is 10.0.2.15 instead of something like 192.168.x.y. I've enabled the bridged adapter and the host-only adapter. Both the laptop and the guest VM have openssh-server installed. I'm not sure what to do at this point. Is there a better place to ask about this?

    Read the article

  • Oracle partners deliver 47 cloud solutions

    - by user758881
    Oracle announced on May 31, 2014 that 40 Oracle partners now deliver 47 Oracle Accelerate cloud solutions. The solutions are built on Oracle Cloud applications including Oracle Financials Cloud, Oracle Sales Cloud and Oracle Service Cloud (together part of Oracle CX Cloud), and Oracle Human Capital Management (HCM) Cloud, and span a range of industries and regions.

      - eVerge Group, Certus Solutions, Presence of IT, CSolutor, Grant Thornton, and KBACE Technologies offer Oracle Accelerate solutions based on Oracle HCM Cloud.
      - DAZ Systems, Inc., Frontera Consulting, and Inoapps offer solutions based on Oracle Financials Cloud.
      - Capricorn Ventis, Enigen, Fellow Consulting, Solveso Interactive, CSolutor, Birchman Consulting, BPI On Demand, Business Technology Services (BizTech), and eVerge Group offer solutions based on Oracle CX Cloud.
      - BPI On Demand also offers a solution based on Oracle Sales Cloud.

    Partner comments on the announcement came from Phil Wilson (Business Development & Alliances, Inoapps), Mike Peterson (President & COO, KBACE Technologies), Deborah Arnold (President, DAZ Systems, Inc.), Sean Moore (Principal, C3Biz), and John Peketz (Vice President, Marketing, eVerge Group).

    Read the article

  • Would having an undergraduate certificate in Computer Science help me get employed as a computer programmer? [on hold]

    - by JDneverSleeps
    I am wondering how employers would perceive the University Certificate in Computing and Information Systems offered by Athabasca University (a distance education institution; the university is legitimate and accredited by the Government of Alberta, Canada). I already have a BSc in Statistics from the University of Alberta (a classic brick-and-mortar public university in Alberta, Canada), so I can state in my resume that I have a "university degree". Luckily, I was able to secure very good employment in my field after graduating from the U of A. The main reason I am interested in taking the certificate program through Athabasca is that knowing how to program can increase my chances of promotion in my current job. I also believe that if something goes badly in my current job and I ever need to look for a new place to work, having the certificate in computer science will help me get employed as a computer programmer (i.e. my choice of new job wouldn't be restricted to the field of statistics). Athabasca University claims that the certificate program is meant to be equivalent to an undergraduate minor in computing science. I carefully looked at the certificate's curriculum and, as far as I am concerned, the program does have the same level of rigour as the undergraduate minor in Computer Science programs offered by other Canadian universities. I am also confident that the certificate program will let me pick up enough skills/background to start a career as a computer programmer. The reasons why I am not 100% sure that getting the certificate is worth the tuition are:

      1. Athabasca University is a distance education institution (accredited by the government, but still).
      2. The credential I will receive is a "university certificate", not an "undergraduate degree".

    Do you think it's a good idea for me to pursue the certificate, given the two facts above? Again, I already have my Bachelor's degree, although it is not in CS. Thanks,

    Read the article

  • WiFi problems on several Ubuntu installations

    - by Rickyfresh
    Okay, this is the first time I have ever had to ask a question, as usually the Ubuntu community has already answered everything, but on this occasion many people are asking and no good solution has emerged so far. Someone please help, or I will have to install Windows on my son's and my girlfriend's PCs, and that would be a disaster as I am trying to help convince people to move from Windows. I installed 12.04 on three computers on the same day:

      - Dell Inspiron (works perfectly)
      - Toshiba Satellite
      - Home-built desktop

    The Dell works perfectly, but the other two keep losing the connection to the wireless internet, and even when they are connected they stop connecting to web sites; for some reason they search Google fine but will not connect to web sites when a link is clicked. So far, people have recommended in other forums:

      - Removing Network Manager and installing wicd (didn't solve it)
      - Changing the MTU in the wireless settings (didn't solve it)
      - All sorts of messing about with Firefox settings (this doesn't solve it, and even if it did, it would leave most average PC users scratching their heads and wishing they had stuck to Windows)

    The problem exists on two very different machines with different wireless cards, so I doubt it's a driver or hardware issue; also, many other Ubuntu users are having the same problem with a vast array of different machines and wireless cards. Can someone please give a good solution to this, as it's going to turn a lot of people away from Ubuntu if it cannot be sorted. I would give some PC specs, but the two machines are vastly different, and the other people complaining of this problem also have very different systems, all showing the same problem.

    Read the article

  • Internal speakers do not work

    - by Nikcefo
    I have a new (from scratch, not an upgrade) installation of Ubuntu 12.10 on my notebook, an Asus A3Ac (it is based on Intel Centrino - a Pentium M with a full-duplex Intel HDA codec). In older versions of Debian-based systems, Intel HDA audio didn't work correctly: alsamixer displayed the wrong outputs and inputs (more than the notebook really has). In clean installations the internal speakers played, but they didn't mute when headphones were plugged in. There was a solution (probably not the best, but working): edit /etc/modprobe.d/alsa-base.conf as root and add the line

        options snd-hda-intel model=z71v position_fix=1

    After a restart it worked correctly (alsamixer displayed the correct devices and the internal speakers muted when I plugged in headphones). It was also working in Ubuntu 12.04. In Ubuntu 12.10 I have a different problem. Alsamixer displays the correct outputs and inputs by default (no need to edit alsa-base.conf), but the internal speakers don't work when headphones aren't plugged in. I have to manually disable the "Auto-Mute" option in alsamixer; then the internal speakers work (but of course they don't mute when headphones are plugged in). Thanks for any idea how to fix it. I'm not sure if it is a bug or if it's caused by this specific hardware. Tomas

    Read the article

  • What exactly are Link Relation Values?

    - by bckpwrld
    From REST in Practice: Hypermedia and Systems Architecture:

        For computer-to-computer interactions, we advertise protocol information by embedding links in representations, much as we do with the human Web. To describe a link's purpose, we annotate it. Annotations indicate what the linked resource means to the current resource: "status of your coffee order", "payment", and so on. We call such annotated links hypermedia controls, reflecting their enhanced capabilities over raw URIs.
        ...
        link relation values, which describe the roles of linked resources ... Link relation values help consumers understand why they might want to activate a hypermedia control. They do so by indicating the role of the linked resource in the context of the current representation.

    I interpret the above quotes as saying that a hypermedia control contains both a link to a resource and an annotation describing the role of the linked resource in the context of the current representation, and that we call this annotation (which describes the role of the linked resource) a link relation value. Is my assumption correct, or does the term link relation value actually describe something different? Thank you

    Read the article

  • Software Error Basics

    Who Causes Errors?

    Software errors are caused by:
      · End-users
      · Programmers
      · Computer Systems

    What Causes Errors?

    Software errors are caused by:
      · Programmer Mistakes and Assumptions
      · Invalid data
      · Unexpected User Interactions
      · Missing Resources
        o Files
        o Databases
        o Network Connectivity
      · Poor network connection
      · Insufficient Permissions

    Where Do Errors Occur?

    Software errors can occur anywhere code is executed:
      · Desktop PC
      · Laptop PC
      · Server
      · Tablet PC
      · Mobile Phone
      · Any Device that can execute software

    When Do Errors Occur?

    Software errors occur when source code is being compiled (Compile-Time) or executed (Run-Time).
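
    As a quick illustration of the compile-time versus run-time distinction above (the snippet is written in C++ purely as an example language; the values are made up):

        #include <iostream>
        #include <stdexcept>
        #include <vector>

        int main()
        {
            // Compile-Time error: caught by the compiler before the program ever runs.
            // int x = "hello";                // error: cannot convert a string literal to int

            // Run-Time error: compiles cleanly, but fails while the program executes
            // (here, a "missing resource" in the form of data that was never loaded).
            std::vector<int> data;
            try {
                std::cout << data.at(0) << "\n";   // out-of-range access
            } catch (const std::out_of_range& e) {
                std::cerr << "run-time error: " << e.what() << "\n";
            }
            return 0;
        }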

    Read the article

  • Artists and music - Need Help Deciding on a CMS

    - by infty
    A friend has asked me to build a site with the following requirements:

      - Staff members must be able to add new music and artists to the page.
      - A gallery must be provided; it would also be good if each artist could have his/her own smaller gallery.
      - Users must be able to vote for artists.
      - Users must be able to take part in discussions (forums or comment sections).
      - Staff members must be able to blog.
      - Staff members must be able to write articles.

    I did a small project where I actually implemented all of these features, but I want to use an existing content management system so that future developers can, hopefully, extend the website more easily, and also so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after watching hours of tutorial videos on both systems I still can't make up my mind on whether I should:

      a) use Drupal 7
      b) use WordPress 3
      c) create my own CMS

    I can imagine that staff members would also want to create content using iPhone- or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with larger projects like this? The site will have approximately 400,000 - 500,000 visitors (not daily visitors; based on numbers from last year over a period of 4 months).

    Read the article

  • NFS mount of /var/www to OS X

    - by ploughguy
    I have spent 2 hours trying to create an NFS mount from my Ubuntu 10.04 LTS server to my OS X desktop system. Objective: a three-way file compare between the code base on the Mac, the development system on the local Linux test box, and the hosted website. The hosted service uses cPanel, so I can mount a web disk - easy as pie, it took 10 seconds. The local Ubuntu box, on the other hand: nothing but pain and frustration. Here is what I have tried:

      - In File Browser, navigate to /var/www/site and right-click. Select "Share this folder". Enter the share name wwwsite and a comment. Click the "Create Share" button. A message says you can only share file systems you own. There is a message on how to fix this, but the killer is that this is sharing via SMB, which will change the LFs to CR-LFs and affect the file comparison. So forget this option.
      - In a terminal window, run shares-admin (I have not been able to convince it to give me the "Shared Folders" option in the System Administration window - maybe it is somewhere else in the menu, but I cannot find it) and define an NFS export: enter the path /var/www/site, select NFS, enter the IP address of the iMac, and save.
      - On the Mac, try to mount the file system using the usual methods - Finder, the command-line "mount" command - not found. Nothing.
      - Tried restarting the Linux box in case there is a daemon that needs restarting - nothing.

    So I have run out of stuff to do. I have tried searching the documentation - it is pretty basic, and the man page documentation is as opaque as ever. Please, oh please, will someone help me get this @38&@^# thing to work! Thanks for reading this far... PG.

    Read the article

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I released an updated version of QueryUnit, version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions and is better at handling string results. I must say that I can no longer live without proper unit testing of a BI solution. Just yesterday, one of the unit tests at a customer site failed, revealing a subtle situation where the release of a new version of a custom application would have corrupted the source of the BI data, with a very low chance that anyone would have noticed it for several days. That can happen when more than 15 systems handle the data needed by your BI solution. The key message here is "Trust No One": if your data hasn't passed quality testing, it's not trustable. Period. QueryUnit is now officially a hero :) No superpowers yet, but useful above all. http://queryunit.codeplex.com/

    Read the article

  • How can one find software development work that involves directly the final end user?

    - by RJa
    I've worked in software development for 15 years and, while there have been significant personal achievements and a lot of experience, I've always felt detached from the man or woman on the street, the everyday person, and how the work affects their lives, in a number of ways:

      - The technologies: embedded software, hidden away, stuff not seen by the everyday person, or process technology supporting manufactured products.
      - The size of the systems, meaning many jobs, divided up, so the work is abstract and no one person can see the whole picture.
      - The organisations: large, with departments dealing with different areas - the software, the hardware, the marketing, the sales, the customer support.
      - The locations and hours: out-of-town business parks away from the rest of society, fixed locations, inflexible 9-5 every day.

    This seems typical of the companies I have worked for and see elsewhere. Granted, there are positives, such as the technology itself and usually being among high-calibre co-workers, but the points above frustrate me about the industry because they detach the work from its meaning. How can one:

      - change these things in an existing job, or compensate for them?
      - find other work that avoids them and connects with the final end user?

    Job designs tend to focus on the job content and technical requirements rather than on how the job fulfils end-user needs and is meaningful.

    Read the article

  • Basic questions while making a toy calculator

    - by Jwan622
    I am making a calculator to better understand how to program, and I have a question about the following lines of code. I wanted to make my equals button with this C# code:

        private void btnEquals_Click(object sender, EventArgs e)
        {
            if (plusButtonClicked == true)
            {
                total2 = total1 + Convert.ToDouble(txtDisplay.Text); // double.Parse(txtDisplay.Text);
            }
            else if (minusButtonClicked == true)
            {
                total2 = total1 - double.Parse(txtDisplay.Text);
            }
            txtDisplay.Text = total2.ToString();
            total1 = 0;
        }

    However, my friend said this way of writing the code was superior, with changes to the minus branch:

        private void btnEquals_Click(object sender, EventArgs e)
        {
            if (plusButtonClicked == true)
            {
                total2 = total1 + Convert.ToDouble(txtDisplay.Text); // double.Parse(txtDisplay.Text);
            }
            else if (minusButtonClicked == true)
            {
                double d1;
                if (double.TryParse(txtDisplay.Text, out d1))
                {
                    total2 = total1 - d1;
                }
            }
            txtDisplay.Text = total2.ToString();
            total1 = 0;
        }

    My questions:

      1) What does the "out d1" part of the minus-button code mean?
      2) My assumption is that the "TryParse" code results in fewer crashes? If I just use "double.Parse" and don't put anything in the textbox, the program will sometimes crash, right?

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best practice guide for you. Click below to download it:

        Best Practices for Building a Virtualized SPARC Computing Environment

    Inside you'll find recommendations for how and when to leverage technologies like:

      - SPARC T4
      - OVM for SPARC hypervisor (version 2.2 and newer)
      - Solaris 11
      - Ops Center 12c
      - ZFS Storage Appliance
      - Oracle network switches

    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:

      - Easy, GUI-based provisioning of new VMs
      - Automated HA failover in the event of physical server failures
      - Automatic load balancing across a cluster of VM hosts
      - Complete end-to-end monitoring

    You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • Why choose an established CMS as opposed to building one from scratch?

    - by SkonJeet
    A lot of my research over the next few weeks will be into different CMSs. I've already had a brief look at EPiServer and Umbraco. While reading into these systems I can't help but think that content management features are achievable without learning the details and structure of many of these (rather large) CMS platforms. I have, in the past, been given projects whereby my role as a developer had to be kept separate from that of an editor (makes sense), i.e. it was my task to develop the design and functionality of the site and my clients' job to update the content. I've achieved this by also implementing a sort of "portal", on which there were a couple of pages that would accept text input, picture uploads, etc. (basically, whatever content they wanted), record this new content to the database, and then by design the code-behind would read all of this from the database into the relevant controls (repeaters, for example). For me, this has been an effective enough way for my clients to manage the content deployed with my solutions. I know that I am wrong - and that CMSs are preferable to solutions built from the ground up - but other than the matter of cost, why?

    Read the article

  • Work @ Java shop. Tasked to redo intranet. I only know PHP - CTO says use that; Ops Says no - we have JSP, use that - Thoughts?

    - by Mackysback
    So I work as a project manager and was given the opportunity to redo the intranet and Internet site... I'd love to; it would be a great addition to my resume. However, I am not familiar with Java or JSP... I am fairly proficient with PHP and MySQL. Also, my intention was to use a CMS that I have experience with: WordPress, Drupal or Joomla. The site would be very simple, so I was thinking WordPress would be fine for this. I told management that I would need it to run on PHP and they gave me their blessing. Now the disconnect between IT and management... I spoke with ops and they told me that we have Java and JSP for that purpose, although the IT guy did say "oh, but I do know that PHP and MySQL are gaining popularity". That said, I do not understand much as far as systems architecture goes... We self-host and run WebSphere with IIS and JBoss with Tomcat. Any advice or suggestions? Thanks! Is my request that unfeasible?

    Read the article

  • Hard drive skipped in boot

    - by Yasin
    Good evening. I just installed Ubuntu 12.04 using a USB stick, but right after the install, after restarting the machine, I get a message asking me to insert a bootable drive. My boot settings in the BIOS have the hard drive first, then DVD, then USB stick, and I have two systems installed, Windows 7 and Ubuntu 12.04. I suspected the hard drive had somehow become disconnected internally, so I checked, but everything was in place. I used the live USB to start Ubuntu, and I could see the hard drive and mount whatever partition I wanted. The one that contains the recently installed Ubuntu looks the same (it hasn't been deleted or anything). I'm not sure if this is a hardware problem or a loader (GRUB) problem, because the hard drive is visible; only it isn't seen by the BIOS. My only means of internet connection is a USB modem, which doesn't work when I'm using the live USB, so I can't download anything from the internet, in case someone asks. I also reinstalled Ubuntu 12.04, to no avail. This is my second problem with this laptop and Ubuntu, and it's not even a week old. I hope this one gets solved. Thank you.

    Read the article

  • How should I structure the implementation of turn-based board game rules?

    - by Setzer22
    I'm trying to create a turn-based strategy game on a tilemap. I'm using design by component so far, but I can't find a nice way to fit components into the part I want to ask about. I'm struggling with the "game rules" logic: that is, the code that displays the menu, allows the player to select units and command them, and then tells the unit game objects what to do given the player's input. The best way I could think of to handle this was a big state machine, so everything that can be done in a "turn" is handled by this state machine, and the update code of the state machine does different things depending on the state. However, this approach leads to a large amount of code (everything not model-related) going into one big class. Of course I can subdivide this big class into more classes, but it doesn't feel modular and upgradable enough. I'd like to know of better systems for handling this, so that I can extend the game with new rules without a monstrous if/else chain (or switch/case, for that matter). Any ideas? What specific design pattern other than MVC should I be using?
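
    One common alternative to a single switch-based state machine is to give each turn phase its own state object, so adding a rule means adding a state class instead of growing one if/else chain. The sketch below is only an illustration of that shape (written in C++; the class names and phases are invented for the example, not taken from the question):

        #include <iostream>
        #include <memory>
        #include <string>

        // Each phase of a turn is its own state; the controller only knows this interface.
        struct TurnState {
            virtual ~TurnState() = default;
            // Handle player input for this phase; return the next state, or nullptr to stay.
            virtual std::unique_ptr<TurnState> handleInput(const std::string& input) = 0;
        };

        struct SelectUnitState : TurnState {
            std::unique_ptr<TurnState> handleInput(const std::string& input) override;
        };

        struct CommandUnitState : TurnState {
            std::unique_ptr<TurnState> handleInput(const std::string& input) override {
                std::cout << "commanding selected unit: " << input << "\n";
                return std::make_unique<SelectUnitState>();   // back to selection
            }
        };

        std::unique_ptr<TurnState> SelectUnitState::handleInput(const std::string& input) {
            std::cout << "selected unit at tile " << input << "\n";
            return std::make_unique<CommandUnitState>();
        }

        // The "game rules" controller delegates to whatever state is current.
        class TurnController {
            std::unique_ptr<TurnState> current = std::make_unique<SelectUnitState>();
        public:
            void onInput(const std::string& input) {
                if (auto next = current->handleInput(input)) current = std::move(next);
            }
        };

        int main() {
            TurnController turn;
            turn.onInput("3,4");       // select the unit on tile (3,4)
            turn.onInput("move 5,6");  // command it
        }

    Whether the states are hand-written classes like this or data-driven components is a separate choice; the point is that the per-phase rules live in separate, swappable units rather than in one large update function.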

    Read the article
