Search Results

Search found 16640 results on 666 pages for 'computer tower'.


  • How to become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Games at Work Part 1: Introduction to Gamification and Applications

    - by ultan o'broin
    Games Are Everywhere

    How many of you (will admit to) remember playing Pong? OK then, do you play Angry Birds on your phone during work hours? Have you thought about why we keep playing online, video, and mobile games, and what this "gamification" business we're hearing about means for the enterprise applications user experience?

    In Reality Is Broken: Why Games Make Us Better and How They Can Change the World, Jane McGonigal says that playing computer and online games now provides more rewards for people than their real lives do. Games offer intrinsic rewards and happiness to the players as they pursue more satisfying work and the success, social connection, and meaning that go with it. Yep, Gran Turismo, Dungeons & Dragons, Guitar Hero, Mario Kart, Wii Boxing, and the rest are all forms of work, it seems. Games are, in fact, work taken so seriously that governments now move to limit the impact of virtual gaming currencies on the real financial system. Anyone who spends hours harvesting crops on FarmVille realizes it’s hard work too. Yet games evoke a positive emotion in players who voluntarily stay engaged with games for hours, day after day. Some 183 million active gamers in the United States play on average 13 hours per week. Weekly, 5 million of those gamers play for longer than a working week (45 hours). So why not harness the work put into games to solve real-world problems? Or, in the case of our applications users, real-world work problems?

    What’s a Game?

    Jane explains that all games have four defining traits: a goal, rules, a feedback system, and voluntary participation. We need to look at which motivational ideas behind the dynamics of the game—what we call gamification—are appropriate for our users. Typically, these motivators are achievement, altruism, competition, reward, self-expression, and status. Common game techniques for leveraging these motivations include:

    - Badging and avatars
    - Points and awards
    - Leader boards
    - Progress charts
    - Virtual currencies or goods
    - Gifting and giving
    - Challenges and quests

    Some technology commentators argue for a game layer on top of everything, but this layer is already part of our daily lives in many instances. We see gamification working around us already: the badging and kudos offered on My Oracle Support or other Oracle community forums, becoming a Dragon Slayer implementor of Atlassian applications, being made duke of your favorite coffee shop on Yelp, sharing your workout details with Nike+, or donating to Japanese earthquake relief through FarmVille, for example. And what does all this mean for the applications that you use in your work? Read on in part two...

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase. In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team. So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case, he would be in two simultaneous battles, one with each base. When they go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have some automaton-logic of some sort, so they can fend for themselves in a limited way using basic rules, until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic. I think this analogy is good enough to frame my question. What sort of platforms are available to develop this sort of game? I've been looking at smartfoxserver, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darbase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.

    Read the article

  • Duplicate ping response when running Ubuntu as virtual machine (VMWare)

    - by Stonerain
    I have the following setup:

    - My router - 192.168.0.1
    - My host computer (Windows 7) - 192.168.0.3
    - Ubuntu running as a virtual machine on the host; the VMware network setting is Bridged mode.

    I've modified the Ubuntu network settings in /etc/network/interfaces and set the following config:

        iface eth0 inet static
        address 192.168.0.220
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1

    Internet works correctly and I can install packages. But it gets weird if I try to ping something; I get this:

        PING belpak.by (193.232.248.80) 56(84) bytes of data.
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        From 192.168.0.1 icmp_seq=1 Time to live exceeded
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=250 time=17.0 ms
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=249 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=248 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=247 time=17.0 ms (DUP!)
        64 bytes from belhost.by (193.232.248.80): icmp_seq=1 ttl=246 time=17.0 ms (DUP!)
        ^CFrom 192.168.0.1 icmp_seq=2 Time to live exceeded
        --- belpak.by ping statistics ---
        2 packets transmitted, 1 received, +4 duplicates, +6 errors, 50% packet loss, time 999ms
        rtt min/avg/max/mdev = 17.023/17.041/17.048/0.117 ms

    I think even more interesting are the results of pinging the router itself:

        stonerain@ubuntu:~$ ping 192.168.0.1 -c 1
        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        From 192.168.0.3: icmp_seq=1 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=6.64 ms
        --- 192.168.0.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 6.644/6.644/6.644/0.000 ms

    But if I set -c 2:

        ...
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=254 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=253 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=252 time=13.5 ms (DUP!)
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=251 time=13.5 ms (DUP!)
        From 192.168.0.3: icmp_seq=2 Redirect Network(New nexthop: 192.168.0.1)
        64 bytes from 192.168.0.1: icmp_seq=2 ttl=254 time=7.87 ms
        --- 192.168.0.1 ping statistics ---
        2 packets transmitted, 2 received, +256 duplicates, 0% packet loss, time 1002ms
        rtt min/avg/max/mdev = 6.666/10.141/13.556/2.410 ms

    Pinging the host machine, on the other hand, works absolutely correctly: no DUPs, no errors. What seems to be the problem and how can I fix it? Thank you.
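
    A minimal way to narrow down where the duplicates are generated (inside the VM's bridged interface, or out on the router) is to capture ICMP on both ends while pinging; this is only a sketch, and the interface name is an assumption that may need adjusting:

        # On the Ubuntu guest (interface assumed to be eth0):
        sudo tcpdump -n -i eth0 icmp
        # Capture the same ping with Wireshark on the Windows host or on another LAN machine.
        # If each echo request leaves the guest once but several replies come back,
        # the duplication is happening outside the VM (router/bridge), not in the guest.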

    Read the article

  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    WHAT DID WORK

    Installed the original AMD64 version of the Ubuntu 12.04 ISO image, burned a DVD and installed it, which showed kernel 3.2.0-23 to begin with. Got 5.1 surround sound working. Got the ATI (now AMD) video drivers for my Radeon HD R6870 video card installed from AMD's website. fglrxinfo came up and reported as normal.

    THE PROBLEM

    Kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI / AMD drivers do not install on any kernel above 3.2.0.x.

    WHAT I HAVE TRIED

    I have gone over this tutorial many times ( https://help.ubuntu.com/community/BinaryDriverHowto/ATI ) and it doesn't work on ANY kernel except 3.2.0.x. The problem I am having here is that the ATI / AMD drivers work for 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. So I looked all over the web for 3 days trying to find an answer, and the advice for the lock-up issue was simply to update the kernel. So I did. I have tried many kernels, and with all of them there are no lock-ups. BUT the restricted AMD drivers from the AMD website will not install, and none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    EXAMPLE OUTPUT OF 3D TYPE OF ERRORS

        javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
        at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
        at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
        at haven.HavenPanel.<init>(HavenPanel.java:68)
        at haven.HavenPanel.<init>(HavenPanel.java:78)
        at haven.MainFrame.<init>(MainFrame.java:182)
        at haven.MainFrame.main2(MainFrame.java:306)
        at haven.MainFrame.access$100(MainFrame.java:34)
        at haven.MainFrame$7.run(MainFrame.java:360)
        at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
        Major opcode of failed request:  139 (ATIFGLEXTENSION)
        Minor opcode of failed request:  66 ()
        Serial number of failed request:  13
        Current serial number in output stream:  13

    EDIT: I forgot to mention that I DID look at this post over the last few days and it did not help.
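
    As a first sanity check before re-running the AMD installer on a newer kernel, it is worth confirming that the headers and build tools for the running kernel are present and seeing what DKMS thinks has been built. A rough sketch (the package names are the standard Ubuntu ones, not taken from the question):

        uname -r                                          # the kernel the fglrx module must be built against
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        dkms status                                       # shows whether an fglrx module was built for this kernel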

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but have taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so, whether I would qualify for a more prestigious position than "code monkey".

    Things I Have Going For Me

    - Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    - Good math and statistics skills.
    - An extensive portfolio of open source work (and the knowledge that working on these projects implies):
      - I wrote a statistics library in D, mostly from scratch.
      - I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library.
      - I wrote a 2D plotting library for D against the GTK Cairo backend. I currently use it for most of the figures I make for my research.
      - I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.)
      - I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.)

    Things I Have Going Against Me

    - Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    - In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    - I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    - I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Problem upgrading 11.04

    - by Krazy_Kaos
    I've been trying to upgrade my Ubuntu 11.04 desktop computer, but when I click on the upgrade button I get an error. I've tried to change my repositories, but it changes nothing in the error (on the "setting new software channel" step). Can someone point me in the right direction? This is my sources.list:

        # deb http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb-src http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb cdrom:[Ubuntu 9.04 _Jaunty Jackalope_ - Release i386 (20090421.3)]/ jaunty main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty main restricted multiverse universe

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty-updates main restricted multiverse universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.

        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb-src http://pt.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu natty partner
        deb-src http://archive.canonical.com/ubuntu natty partner

        deb http://us.archive.ubuntu.com/ubuntu/ natty-security main restricted multiverse universe
        deb http://us.archive.ubuntu.com/ubuntu/ natty-proposed restricted main multiverse universe
        # deb http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        # deb-src http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        deb http://extras.ubuntu.com/ubuntu natty main #Third party developers repository
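
    A common way past this kind of upgrade error is to temporarily disable the third-party lines in sources.list (the launchpad and torproject entries above), refresh the package lists, and run the release upgrader from a terminal so it prints a more specific message. A rough sketch, not a guaranteed fix:

        sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup   # keep a copy first
        # comment out or remove the non-Ubuntu entries in /etc/apt/sources.list, then:
        sudo apt-get update
        sudo do-release-upgrade        # command-line equivalent of the Upgrade button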

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert.

    This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken.

    @rob_farley
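
    For readers who have not seen the syntax, here is a rough sketch of what a memory-optimized table declaration looks like in SQL Server 2014; the table and column names are made up, and the database needs a memory-optimized filegroup before this will run:

        CREATE TABLE dbo.OrdersInMemory
        (
            OrderId     INT        NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),  -- hash index, as described above
            CustomerId  INT        NOT NULL,
            OrderDate   DATETIME2  NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);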

    Read the article

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies - HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach them. Do they need to know about bold as well as strong, etc.?

    UPDATE: based on your feedback I will only be teaching the latest version of everything - CSS3, HTML5 etc.

    I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2 hour lab. Obviously there's only so much I can cover in that time and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far were:

    - Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL etc. Looking at different example websites and discussing how they work.
    - Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc)
    - Lesson 2 - CSS for styling and layout - fonts, webfonts, float etc
    - Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions)
    - Lesson 4 - More JS programming fundamentals, DOM manipulation
    - Lesson 5 - jQuery - making things fly about and look cool
    - Lesson 6 - XML and Ajax
    - Lesson 7 - PHP basics - syntax, server-side principles
    - Lesson 8 - PHP and MySQL - forms, logins, saving user info
    - Lesson 9 - don't know
    - Lesson 10 - don't know

    Please let me know if you think this is the right order, what I have missed, how to use any spare sessions etc. Thanks :)

    UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course, it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2 hour session; however, I do not have the option of reducing the syllabus, so I will just have to explain what the technology does and encourage the students to research it in their own time.

    Read the article

  • SSH from external network refused

    - by wulfsdad
    I've installed openssh-server on my home computer (running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        #Port 22
        Port 2222
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes

        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768

        # Logging
        SyslogFacility AUTH
        #LogLevel INFO
        LogLevel VERBOSE

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes

        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys

        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes

        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no

        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no

        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no

        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        Banner /etc/sshbanner.net

        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*

        Subsystem sftp /usr/lib/openssh/sftp-server

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes

        #specify which accounts can use SSH
        AllowUsers onlyme

    I've also configured my router's port forwarding table to include:

        LAN Ports: 2222-2222
        Protocol: TCP
        LAN IP Address: "IP Address" displayed by viewing "connection information" from right-click menu of system tray
        Remote Ports[optional]: n/a
        Remote IP Address[optional]: n/a

    I've tried various other configurations as well, using primary and secondary DNS, and also with specifying remote ports 2222-2222. I've also tried with TCP/UDP (actually two rules, because my router requires separate rules for each protocol). With any router port forwarding configuration, I am able to log in with:

        ssh -p 2222 -v localhost

    But when I try to log in from school using:

        ssh -p 2222 onlyme@IP_ADDRESS

    I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS". When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I receive a "Connection refused" message. :^( Someone please help me figure out how to make this work.
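
    Two quick checks that usually narrow this down: confirm sshd is really listening on 2222 from inside the LAN, and then, from outside the LAN, test against the router's public (WAN) address rather than any of the LAN-side addresses. A sketch with placeholder addresses, not a confirmed fix:

        # On the Lubuntu machine itself:
        sudo netstat -tlnp | grep 2222        # sshd should show as LISTEN on 0.0.0.0:2222
        # From school, use the router's public WAN IP (e.g. as reported by the router's status page),
        # not the 192.168.x.x address shown in "connection information":
        ssh -p 2222 -v onlyme@YOUR_PUBLIC_IP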

    Read the article

  • How to Add a Taskbar to the Desktop in Ubuntu 14.04

    - by Lori Kaufman
    If you’ve switched to Ubuntu from Windows, it may take some time to get used to the new and different interface. However, you can easily incorporate a familiar Windows feature, the Taskbar, into Ubuntu to make the transition easier. A tool called Tint2 provides a bar at the bottom of the Ubuntu Desktop that resembles the Windows Taskbar. We will show you how to install it and make it start every time you log into Ubuntu.

    NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.

    Press Ctrl + Alt + T to open a Terminal window. To install Tint2, type the following line at the prompt and press Enter.

        sudo apt-get install tint2

    Type your password at the prompt and press Enter. The progress of the installation displays and then a message displays saying how much disk space will be used. When asked if you want to continue, type a “y” and press Enter. When the installation has finished, close the Terminal window by typing “exit” at the prompt and pressing Enter.

    Click the Search button at the top of the Unity bar. Start typing “startup applications” in the Search box. Items that match what you type start displaying below the Search box. When the Startup Applications tool displays, click the icon to open it. On the Startup Applications Preferences window, click Add. On the Add Startup Program dialog box, enter a name for the startup application. This name displays in the list on the Startup Applications Preferences window. Type “tint2” in the Command edit box, enter a description in the Comment edit box, if desired, and click Add. Tint2 is added as a startup program and will start every time you log into Ubuntu. Click Close to close the Startup Applications Preferences window.

    Log out and log back in to make the Taskbar available on the desktop. You do not need to reboot the computer for this change to take effect. Now, when you minimize a program, an icon for it displays on the Taskbar at the bottom of the screen, just like the Taskbar in Windows. If you decide that you don’t want the Taskbar to display every time you log into Ubuntu, you can uncheck the Tint2 startup program on the Startup Applications Preferences window. You don’t need to delete it from the list.
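
    If you prefer doing the whole thing from the terminal, the steps above boil down to roughly the following; the hand-written autostart entry is an alternative to the Startup Applications dialog and is offered as a sketch, not part of the original article:

        sudo apt-get install tint2
        # Optional: create the autostart entry by hand instead of using the GUI.
        mkdir -p ~/.config/autostart
        printf '[Desktop Entry]\nType=Application\nName=Tint2 taskbar\nExec=tint2\n' > ~/.config/autostart/tint2.desktop
        tint2 &        # try it immediately without logging out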

    Read the article

  • Big-name School for Undergrad Students

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still get a pretty decent education while saving myself from over a hundred-thousand dollars worth of debt? Yes, I know questions like this has been asked before (namely here and here), but please bear with me because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know: Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd be easily getting rid of over a hundred-thousand dollars of debt? And how does the fact that I can get a fifth-years master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far lesser-known than what I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit overweighed by a potential long-term loss? The answers to these questions all tie in to my final college decision (again, permitting I make it to these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but will be getting little to no financial aid (likely federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • What are these errors when I try to "make" the driver of my wireless network?

    - by Tom Brito
    I got a wireless-to-USB adapter, and I'm having some trouble installing the drivers on Ubuntu. First of all, the readme file says to use the "make" command, and I already get errors:

        $ make
        make[1]: Entering directory `/usr/src/linux-headers-2.6.35-22-generic'
        CC [M] /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c: In function ‘rtl8192_usb_probe’:
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12325: error: ‘struct net_device’ has no member named ‘open’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12326: error: ‘struct net_device’ has no member named ‘stop’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12327: error: ‘struct net_device’ has no member named ‘tx_timeout’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12328: error: ‘struct net_device’ has no member named ‘do_ioctl’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12329: error: ‘struct net_device’ has no member named ‘set_multicast_list’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12330: error: ‘struct net_device’ has no member named ‘set_mac_address’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12331: error: ‘struct net_device’ has no member named ‘get_stats’
        /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12332: error: ‘struct net_device’ has no member named ‘hard_start_xmit’
        make[2]: *** [/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o] Error 1
        make[1]: *** [_module_/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-22-generic'
        make: *** [all] Error 2

    /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/ is the path where I copied the drivers on my computer. Any idea how to solve this? (I don't even know what the error is...)

    Update: output of sudo lshw -class network:

        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: eth0
             version: 03
             serial: 78:e3:b5:e7:5f:6e
             size: 10MB/s
             capacity: 1GB/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s
             resources: irq:42 ioport:d800(size=256) memory:fbeff000-fbefffff memory:faffc000-faffffff memory:fbec0000-fbedffff
        *-network DISABLED
             description: Wireless interface
             physical id: 2
             logical name: wlan0
             serial: 00:26:18:a1:ae:64
             capabilities: ethernet physical wireless
             configuration: broadcast=yes multicast=yes wireless=802.11b/g
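
    For context on what those errors mean: around kernel 2.6.29 the open/stop/do_ioctl/... callbacks were moved out of struct net_device into struct net_device_ops, so an old out-of-tree driver that assigns dev->open and friends directly no longer compiles against 2.6.35 headers. Below is a rough, untested sketch of the kind of change the vendor driver would need; the rtl8192_* and r8192_* function names are placeholders for whatever the driver actually defines:

        /* Sketch only: map the old direct assignments onto net_device_ops. */
        static const struct net_device_ops rtl8192_netdev_ops = {
                .ndo_open               = rtl8192_open,            /* was dev->open */
                .ndo_stop               = rtl8192_close,           /* was dev->stop */
                .ndo_tx_timeout         = rtl8192_tx_timeout,      /* was dev->tx_timeout */
                .ndo_do_ioctl           = rtl8192_ioctl,           /* was dev->do_ioctl */
                .ndo_set_multicast_list = r8192_set_multicast,     /* was dev->set_multicast_list */
                .ndo_set_mac_address    = r8192_set_mac_adr,       /* was dev->set_mac_address */
                .ndo_get_stats          = rtl8192_stats,           /* was dev->get_stats */
                .ndo_start_xmit         = rtl8192_hard_start_xmit, /* was dev->hard_start_xmit */
        };

        /* ...and in rtl8192_usb_probe(), instead of the individual assignments: */
        /* dev->netdev_ops = &rtl8192_netdev_ops; */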

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. a piece of code (a block or a function) that:

    - Can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, and be defined in one part of a program and invoked in a totally different part of the same program.
    - Can capture variables from the context in which it is defined, and access them when it is later invoked (possibly in a totally different context).

    Here is an example of a closure written in Scala:

        def filterList(xs: List[Int], lowerBound: Int): List[Int] = xs.filter(x => x >= lowerBound)

    The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function.

    I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads:

        In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state.

    and further on:

        [...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.

    On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side-effects).

    Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic?

    IMPORTANT NOTE: I am not questioning the usefulness of side-effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages / disadvantages of closures with or without side effects. I am only interested to know if using such closures is still considered functional style by the proponents of functional programming or if, on the contrary, their use is discouraged when using a functional style.
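
    To make the contrast concrete, here is a small Scala sketch (not from the question) of the second kind of closure being discussed: the function literal captures a local var from its defining scope and mutates it on every invocation, so invoking the closure has a side effect on its environment:

        def filterAndCount(xs: List[Int], lowerBound: Int): (List[Int], Int) = {
          var kept = 0                          // captured, mutable state
          val result = xs.filter { x =>
            val keep = x >= lowerBound
            if (keep) kept += 1                 // side effect on the enclosing scope
            keep
          }
          (result, kept)
        }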

    Read the article

  • How do I set image position in conky

    - by realitygenerator
    I copied and modified an existing .conkyrc file from the Ubuntu forum and I'm trying to place the LinuxMint logo in a specific position. Below is my conkyrc file (the screenshot did not come through):

        # UBUNTU-CONKY
        # A comprehensive conky script, configured for use on
        # Ubuntu / Debian Gnome, without the need for any external scripts.
        #
        # Based on conky-jc and the default .conkyrc.
        # INCLUDES:
        # - tail of /var/log/messages
        # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
        #
        # -- Pengo
        #

        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        own_window_type desktop
        own_window_transparent yes
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

        # Use double buffering (reduces flicker, may not work for everyone)
        double_buffer yes

        # fiddle with window
        use_spacer right

        # Use Xft?
        use_xft yes
        xftfont URW Gothic:size=8
        xftalpha 0.8
        text_buffer_size 2048

        # Update interval in seconds
        update_interval 3.0

        # Minimum size of text area
        # minimum_size 250 5

        # Draw shades?
        draw_shades no

        # Text stuff
        draw_outline no # amplifies text if yes
        draw_borders no
        uppercase no # set to yes if you want all text to be in uppercase

        # Stippled borders?
        stippled_borders 3

        # border margins
        border_margin 9

        # border width
        border_width 10

        # Default colors and also border colors, grey90 == #e5e5e5
        default_color grey
        own_window_colour brown
        own_window_transparent yes

        # Text alignment, other possible values are commented
        #alignment top_left
        #alignment top_right
        #alignment bottom_left
        #alignment bottom_right.
        alignment top_middle

        # Gap between borders of screen and text
        gap_x 10
        gap_y 10

        #Display temp in fahrenheit
        temperature_unit fahrenheit

        #Choose which screen on which to display
        # stuff after 'TEXT' will be formatted on screen
        TEXT
        $color
        ${color green}SYSTEM ${hr 2}$color
        $nodename $sysname $kernel on $machine
        LinuxMint 11 "Katya" (Oneric) ${image ~/Conky/Logo_Linux_Mint.png -s 80x60 -f 86400}

        ${color green}CPU ${hr 2}$color
        ${freq}MHz Load: ${loadavg} Temp: ${hwmon temp 1}
        $cpubar
        ${cpugraph 000000 ffffff}
        NAME PID CPU% MEM%
        ${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1}
        ${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2}
        ${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3}
        ${top name 4} ${top pid 4} ${top cpu 4} ${top mem 4}

        ${color green}MEMORY / DISK ${hr 2}$color
        RAM: $memperc% ${membar 6}$color
        Swap: $swapperc% ${swapbar 6}$color
        Root: ${fs_free_perc /}% ${fs_bar 6 /}$color
        hda1: ${fs_free_perc /media/sda1}% ${fs_bar 6 /media/sda1}$color

        ${color green}NETWORK (${addr eth1}) ${hr 2}$color
        Down: $color${downspeed eth1} k/s ${alignr}Up: ${upspeed eth1} k/s
        ${downspeedgraph eth1 25,140 000000 ff0000} ${alignr}${upspeedgraph eth1 25,140 000000 00ff00}$color
        Total: ${totaldown eth1} ${alignr}Total: ${totalup eth1}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

        ${color green}LOGGING ${hr 2}$color
        ${execi 30 tail -n3 /var/log/messages | awk '{print " ",$5,$6,$7,$8,$9,$10}' | fold -w50}

        ${color green}FORTUNE ${hr 2}$color
        ${execi 120 fortune -s | fold -w50}

    I want to put the Mint logo right after the word (Oneric). Any help would be greatly appreciated.
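
    For what it's worth, conky's ${image} variable also accepts a -p x,y argument giving an absolute position inside the conky window, in addition to the -s size already used above. The coordinates below are guesses for placing the logo near the "(Oneric)" line, so treat this as a hedged starting point rather than a known-good value:

        ${image ~/Conky/Logo_Linux_Mint.png -p 160,60 -s 80x60 -f 86400}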

    Read the article

  • Paranoid management, contractor checking work [closed]

    - by user833345
    Just wanted to get some opinions and experiences on an issue I'm having at work. First, a little background. I've been working at a company for some time (past any probation periods) and rewriting a horrendous system. No tests, incomplete and broken functionality everywhere, enough copypasta to feed a small village, redundant code, more unused SQL tables than used ones and terrible performance. I've never seen such bad code, pretty much all of it is worthy of being posted on TheDailyWTF. The company has been operating for a number of years and have had a string of bad developers working on this system. I made a call on rewriting instead of refactoring since I judged it to be less work overall and decided that the result will address the requirements more appropriately, since the central requirement is to have a future-proof system for the next decade with plenty of room to scale up. Refactoring would have entailed untangling a huge ball of yarn and at the same time integrating it with a proper foundation or building a foundation from scratch. I've introduced the latest spiffy framework, unit & functional testing, CI, a bug tracker and agile workflow to the environment. I've fixed most of the performance issues of the old system (there were no indexes on any of the tables, for example). I've created an automated deployment process for the old system. The CTO has been maintaining the old system while I have been building the new one and he has been advising management that everything is being done as per best practices. However, management is hiring a contractor to come in and verify my work. In my experience, this is unprecedented. I can understand their reasoning to an extent, since they've had bad luck in the past, but can't help but feel somewhat offended at the fact that they distrust two senior developers who have been working with them for some time enough that a third party is being brought in. And it's not just me who is under watch - people's emails are constantly checked, someone had a remote desktop application installed on their computer of which I was asked to check the usage logs to try to determine if they were stealing sensitive data and there are CCTV cameras in one of the rooms. It's the first time I've decided to disable my Skype history at work. Am I right to feel indignant here? Has anyone else ever encountered such a situation? If so, how did it work out in the end? Was it worth sticking around? Should I just find another job?

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen similar problems to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3 GB persistence) I suddenly get this error:

        (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument
        Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    (It says I can type "help" for commands, but I don't know how to go on from there; I'm totally new to Linux.)

    The reason I say my case is different is because my Ubuntu worked fine for over a week, even pretty fast, and now this problem has happened. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially when booting, which took 15 minutes from a USB stick!). I also kept getting the same problem as above after a while when booting and had to re-create a live USB Linux several times. Installing on the hard drive is not an option because my hard drive has physical damage and getting a replacement will take a while, therefore I can only use live-USB or live-SD-card Ubuntu.

    As I said, I used Ubuntu without problems for more than a week, and before that for several weeks on USB sticks, but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD flash card, and when booting I pressed F6 and selected the first option, "no acpi" or something like that... I had used it before and noticed it slowed down the time it took Linux to boot. This time it caused this error. Now even when I boot normally/default I get this error.

    Now I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there and the filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes/downloaded programs to my SD card persistent Ubuntu and would hate to lose them, since downloading is expensive for me, and since the problem seems to re-occur...)

    Can anyone help me, preferably without having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post, I just didn't know what info is relevant and what isn't! Kon
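
    One quick, non-destructive check before rebuilding the SD card is to verify the live medium against its own checksum list, since a squashfs that fails to mount with "Invalid argument" is often simply corrupted. Roughly (the mount point is an assumption, and this assumes the md5sum.txt from the ISO was copied onto the SD card):

        cd /media/SDCARD                          # wherever the live SD card is mounted
        md5sum -c md5sum.txt | grep -v ': OK'     # anything printed here failed its checksum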

    Read the article

  • Failed to start up after upgrading software in ubuntu 10.10

    - by Landy
    I asked this question on Super User one hour ago, then I learned about this community, so I moved the question here...

    I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there are some updates to install and I confirmed the action. I should have read the update list, but I didn't. I can only remember there was an update for cups. After the upgrade, Update Manager required a restart, and I confirmed that too. But after the restart, the computer can't start up. There are errors in the console:

        Begin: Running /scripts/init-premount ... done.
        Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
        [xxx]usb 1-8: new high speed USB device using ehci_hcd and address 3
        [xxx]usb 2-1: new full speed USB device using ohci_hcd and address 2
        [xxx]hub 2-1:1.0: USB hub found
        [xxx]hub 2-1:1.0: 4 ports detected
        [xxx]usb 2-1.1: new low speed USB device using ohci_hcd and address 3
        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        ALERT! /dev/sda1 does not exist. Dropping to a shell!

        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell(ash)
        Enter 'help' for a list of built-in commands.
        (initramfs) [cursor is here]

    At the moment, I can't input anything in the console. The keyboard doesn't work at all. What's wrong? How can I check the boot args or "root=" as suggested? How can I fix this issue? Thanks.

    PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev).
    PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64.
    PS3: From this link, I think the keyboard doesn't work because the USB driver is not loaded at that time.
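
    Since the error says modules.dep for 2.6.35-22-generic is missing, one recovery path worth trying is to boot a live CD/USB, chroot into the installed system, and regenerate the module index and initramfs for that kernel. A sketch, assuming /dev/sda1 really is the root partition:

        sudo mount /dev/sda1 /mnt
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        depmod -a 2.6.35-22-generic               # rebuild /lib/modules/.../modules.dep
        update-initramfs -u -k 2.6.35-22-generic  # rebuild the initramfs for that kernel
        exit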

    Read the article

  • Professional immigration

    - by etranger
    Hello all, Does anyone here have a practical advice on professional relocation from Russia to Europe? The reasons behind making such a decision are far beyond the subject, perhaps, so I'll stick to the practical part. Having done some of the "common stuff" for finding a job, I am now facing two serious problems: I am a "dual-class" person, with university degree in marketing, and multiple years of self-studied computer competence (hence my writing here). Have professional experience in both areas. I don't currently hold a European work permit. From what I can see, this results in normal HR person throwing out my CV as either being "overqualified" or "too much trouble with making the permit". I do have the skills and character to start my own business, but it requires start-up capital that I don't have, over the last years I had to pay high bills for medical treatment of my family member, who had deceased. Now, I'm almost out of debts. As you can probably guess, English is not a problem, and I'm open to new languages, but first steps of entering the market, or the society, is the problematic part. I live close to Norway, and am trying to get some professional contacts there, but it hasn't got me any practical perspective so far. Any advice is greatly appreciated. EDIT: I am currently making my living off web site development, and occasional consulting services both in IT and marketing. For purely geographic reasons I'm dealing with clients that reside in the same city where I live, pop. 350 000. Being quite local, market requirements for web sites are simple and stable — clients need to control navigation, write articles in a word-like editor, upload illustrations and place ad banners, all with no additional programming. As many web developers do, I'm using my own content management system that fits these expectations. I have also started developing a newer version of this system that has better support for international environments, but I'm too distant from the real market demand in Europe to speak of the right track here. Technically it's based on php/mysql and uses xslt for templating. It allows for quick website deployment, and has architectural neatness, lack of which made me abandon similar opensource solutions (Joomla and the like). Deploying time from rasterized design proofs is normally under 6-8 working hours, don't know how that compares to the world practice. EDIT 2: Can anyone share what Norwegian (Scandinavian) web solutions market currently demands?

    Read the article

  • 5 Step Procedure for Android Deployment with NetBeans IDE

    - by Geertjan
    I'm finding that it's so simple to deploy apps to Android that I don't need to use the Android emulator at all; I haven't been able to figure out how it works anyway (a big blinky screen pops up that I don't know what to do with). I just deploy the app straight to Android, try it out there, and then uninstall it if needed. The whole process takes a few seconds (only steps 4 and 5 below need to be done for each deployment iteration, after you've done steps 1, 2, and 3 once to set up the deployment environment). Here's what I do:

    1. On Android, go to Settings | Applications. Check "Unknown sources". In "Development", check "USB debugging".
    2. Connect Android to your computer via a USB cable.
    3. Start up NetBeans IDE, with NBAndroid installed as described yesterday, and create your "Hello World" app.
    4. Right-click the project in the IDE and choose "Export Signed Android Package". Create a new keystore, or choose an existing one, via the wizard that appears. At the end of the wizard (it would be nice if NBAndroid would let you set up a keystore once and then reuse it for all your projects, without needing to work through the whole wizard step by step each time), you'll have a new release APK file (Android deployment archive) in the project's 'bin' folder, which you can see in the Files window.
    5. Go to the command line (it would be nice if NBAndroid were to support adb, which would mean I wouldn't need the command line at all) and browse to the location of the APK file above. Type "adb install helloworld-release.apk" or whatever the APK file is called. You should see a "Success" message in the command line. Now the application is installed. On your Android, go to "Applications", and there you'll see your brand new app. Try it out there and delete it if you're not happy with it.

    After you've made a change in your app, simply repeat steps 4 and 5, i.e., create a new APK and install it via adb. Steps 4 and 5 take a couple of seconds. And, given that it's all so simple, I don't see the value of the Android emulator, at all.
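
    The adb side of the cycle described above boils down to a few commands; the file name matches the example in the post, while the package name in the uninstall line is a made-up example (adb uninstall takes the application's package name, not the APK file name):

        adb devices                            # confirm the phone is visible over USB
        adb install helloworld-release.apk     # step 5: push and install the signed APK
        adb uninstall com.example.helloworld   # remove it again when you're done trying it out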

    Read the article

  • Do I need to go to a big-name university?

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still get a pretty decent education while saving myself from over a hundred-thousand dollars worth of debt? Yes, I know questions like this has been asked before (namely here and here), but please bear with me because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know: Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd be easily getting rid of over a hundred-thousand dollars of debt? And how does the fact that I can get a fifth-years master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far lesser-known than what I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit overweighed by a potential long-term loss? The answers to these questions all tie in to my final college decision (again, permitting I make it to these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but will be getting little to no financial aid (likely federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • Why can't I install Microsoft Office 2007 in Ubuntu 11.04?

    - by DK new
    I am very new to Ubuntu and only just getting the hang of it, so my questions might sound stupid, especially because I'm a beginner with techie things in general. Because of the nature of my work, where everyone uses Windows and Microsoft Office, I need access to MS Office 2007/2010: documents with a lot of tables or images open all haywire in LibreOffice (which has otherwise been great!). I have been reading up about installing MS Office through WINE/PlayOnLinux, but have been unsuccessful so far. I downloaded an MS Office 2007 package from Pirate Bay, which I extracted into a folder. I've tried numerous ways to install it through WINE and PlayOnLinux, but will discuss the one which seems to be getting me somewhere: http://www.webupd8.org/2011/01/how-to-install-microsoft-office-2007-in.html ..... Initially, when I clicked the install button for MS Office, I got a message saying "The install location you selected does not have 1558MB free space. Free up space from the selected install location or choose a different install location". The install location in this case said "C:\Program Files\Microsoft Office", which confused me, as I don't have drives named C, Z, etc. I went to configure WINE and, under the drives tab, created a drive named A with the path /media/cd025f16-433b-4a90-abb6-bb7a025d0450/. The space message is also confusing, as I have at least 450GB of unused space on my computer. Anyway, when I selected the A drive for installation, the installation starts, but I soon get the following error message: "Office cannot find Office.en-us\OfficeLR.Cab. Browse to a valid installation source". The bit after "Office" in "OfficeLR.Cab" has been different every time I've made an attempt. When I select the Office.en-us sub-folder, or any other folder within the folder where MS Office 2007 is saved, it says "invalid source"! I have been trying to get this sorted for 15 hours now (addictive!) and have learnt a lot in the process, but have not managed to crack it. It might be something stupidly simple that I'm not aware of that is stopping it. I would really appreciate some help! Thanks a lot. Also, I am still getting used to the language, so I might have many more questions. I am using Ubuntu 11.04 (tag 11.04). Also, I think I don't have Windows any more: when my friend installed Ubuntu on my new laptop, which had Windows 7, he was trying to keep Windows in a separate partition, but something happened and Windows was not there! Looking forward to some support! Again, thanks a lot.
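    For anyone attempting the same thing, a minimal sketch of the usual WINE route is below. Treat it as an assumption-laden starting point rather than a tested recipe: it presumes a complete Office 2007 installation source (a "cannot find Office.en-us\....Cab" error often points to an incomplete or corrupt copy), a fresh 32-bit WINE prefix, and a hypothetical path (~/office2007) to the extracted installer files.

        # Sketch only: install Office 2007 into its own 32-bit WINE prefix.
        sudo apt-get install wine                  # WINE from the Ubuntu repositories
        export WINEARCH=win32                      # Office 2007 expects a 32-bit prefix
        export WINEPREFIX="$HOME/.wine-office"     # keep Office separate from the default prefix
        winecfg                                    # creates and initialises the prefix; close the dialog
        cd ~/office2007                            # hypothetical location of the extracted installer
        wine setup.exe                             # run Microsoft's own installer inside WINE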

    Read the article

  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I no longer have internet access via wifi.

    I can:
    - See networks
    - Connect to a network
    - Ping myself (localhost, 192.168.0.103)

    I can't:
    - Ping others (including other devices on the same wireless network, including the gateway/router)
    - Resolve hosts
    - Access any other external resource, whether on my own network or on the internet

    Using Wireshark, I noticed my computer is continuously sending ARP requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103", but it never gets any replies. When I ping another IP address for which the MAC address is still cached, there is about 90% packet loss, and even the packets that do arrive take around 3000 ms. The output of route -n is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth1
        192.168.0.0     0.0.0.0         255.255.255.0   U     9      0        0 eth1
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

    Before upgrading, wifi worked fine, and other devices still work fine on the same network. Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions?

    Update: I'm using the wl driver. Here's the relevant output of some commands:

        lspci | grep Wireless
        03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01)

        cat /etc/modprobe.d/blacklist.conf
        [...]
        blacklist mac80211
        blacklist brcm80211
        blacklist cfg80211
        blacklist lib80211_crypt_tkip
        blacklist lib80211
        blacklist b43

        cat /etc/rc.local
        sudo modprobe -r lib80211
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko
        sudo modprobe wl
        exit 0

    The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade).

    Update 2: added information about the exact hardware below. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is:

        03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)
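    One detail worth noting: the insmod lines in /etc/rc.local point at modules under /lib/modules/3.2.0-30-generic-pae/, which belongs to an older kernel, so they can no longer match the kernel that 13.10 installs. A common approach for getting the BCM4313 working again with the proprietary wl driver is to rebuild it against the new kernel. The sketch below uses the standard Ubuntu package and module names; treat it as a starting point under those assumptions rather than a guaranteed fix.

        sudo apt-get update
        sudo apt-get install --reinstall bcmwl-kernel-source   # rebuilds the proprietary wl (Broadcom STA) driver for the running kernel
        sudo modprobe -r b43 brcmsmac bcma wl                   # unload competing or stale wireless modules
        sudo modprobe wl                                        # load the freshly built driver
        lsmod | grep wl                                         # confirm it is loaded, then reconnect to the network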

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what the L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have; for example, this whole graphics computing thing. Then, on the other side, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are believed to be practically unsolvable for large N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps that rarely interact. Over the last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur at the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.

    Read the article

  • Oracle Could Lead In Cloud Business Apps Within Year

    - by Richard Lefebvre
    Below is a reprint of an article written by Pete Barlas, Investor's Business Daily, published on Investors.com: Oracle (ORCL) is all but destined to become the largest seller of cloud business-software applications, analysts say, and perhaps within a year. What that means in the long run is much debated, though, as analysts aren't sure whether pricing competition might cut into profit or what other issues might develop in the fast-emerging cloud software field. But the database leader, which is either No. 1 or 2 to SAP (SAP) in business apps overall, simply has the size and scope to overtake current cloud business-app leader, Salesforce.com (CRM), analysts say. Oracle rolled out its first full suite of cloud applications on June 6. Cloud computing lets companies store data and apps on the Internet "cloud" and access it quickly and easily. The applications run the gamut from customer relationship management software to social networking sites for employees, partners and customers. For longtime software giants like Oracle, the cloud is a big switch. They get the great bulk of revenue from companies and other enterprises buying or licensing software that the customers keep on their own computer systems. Vendors also get annual maintenance fees. Analysts estimate Oracle is taking in a mere $1 billion or so a year from cloud-based software sales and services now. But while that's just a sliver of the company's $37 billion in sales last year, it's already about a third of the total sales for Salesforce, which is expected to end this year with some $3 billion in revenue. Operates In 145 Countries: Oracle operates in more than 145 countries vs. about 70 for Salesforce. And Oracle has far more apps than Salesforce. Revenue doesn't equate to profit, but it's inevitable that huge Oracle will become the largest seller of cloud applications, says Trip Chowdhry, an analyst for Global Equities Research. "What Oracle has is global presence," he said. "They have two things driving the revenue: breadth of the offering and breadth of the distribution. You put those applications in those sales reps' hands and you get deployments not in just one country but several countries." At the June 6 event, Oracle CEO Larry Ellison emphasized that his company could and would beat Salesforce.com in head-to-head battles for customers. Oracle makes software to help companies manage such tasks as customer relationships, recruiting, supply chains, projects, finances and more. That range gives it an edge over all rivals, says Michael Fauscette, an analyst for research firm IDC.

    Read the article
