Search Results

Search found 20785 results on 832 pages for 'idea'.


  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx

    I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work, in order of increasing fidelity:

    1. Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that), and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else.

    2. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have ongoing conversations about a topic or task that was being worked on.

    3. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some co-located teams that also use them pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history, so people can easily see what might have been discussed while they were away from their computers.

    4. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there is an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.

    5. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.

    When a conversation gets escalated, that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later.
    Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move.

    Working Asynchronously

    Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available.

    Going back to my list from the beginning of this post for a second, I characterize items #1–3 as “asynchronous” modes of communication, while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons:

    1. You can often find a higher fidelity way to convey your message without holding a synchronous conversation.
    2. Asynchronous modes of communication are (usually) persistent and searchable.

    You Don’t Have to Broadcast Live

    Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make, and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, take a short screen recording while you narrate over it and post the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later; this can be a great way to communicate effectively with your team asynchronously.

    I Said What?!?

    Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided.
    Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful.

    When and How To Escalate

    I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is: when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:

    Take notes – This is huge, and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than two people, you should have someone taking notes. Taking notes while participating in a meeting can be difficult, but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.

    Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the ongoing work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way, when the folks from your QA team pick up the story to test a few days later, they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work.

    Communicating Well is Hard

    At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to think very carefully about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea.
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Can't resolve localhost on Mac OS X Server

    - by iainbeeston
    I have a server running OS X Server 10.5 and it can't resolve localhost to 127.0.0.1. When I try ping, this is what happens:

        ping localhost
        ping: cannot resolve localhost: Unknown host

    SSH and web browsers get similar results (unknown host). If I try using 127.0.0.1 or the IP address assigned on the LAN, all of the above work. Here's the contents of my /etc/hosts file:

        cat /etc/hosts
        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting.  Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    I have no local DNS service running. Does anyone have any idea why this might be happening or how I can fix it?
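    A couple of quick checks can narrow down whether Directory Services is consulting /etc/hosts at all; a hedged diagnostic sketch using tools that ship with OS X 10.5 (exact output will vary, and whether this surfaces the cause is an assumption):

        dscacheutil -q host -a name localhost   # ask Directory Services directly
        dscacheutil -flushcache                 # flush the resolver cache, then retry ping
        scutil --dns                            # show the resolver configuration in use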

    Read the article

  • XP Mode (Windows Virtual PC for Windows 7) no longer requires hardware virtualisation - hurrah !

    - by Liam Westley
    Windows Virtual PC (aka XP Mode)

    When XP Mode was released, it insisted on hardware virtualisation being present on your CPU and enabled in the BIOS. Given that Windows Virtual PC was based on an improved Virtual PC 2007, which provided hardware virtualisation as a user-selectable option, I did wonder why on earth Microsoft thought this was a good idea. Not only do many people not have a CPU with hardware virtualisation support, some manufacturers don't provide a BIOS option to enable this setting, especially on laptops - yes Sony, Toshiba and Acer, I'm looking at you.

    Dumb and dumber

    This issue became a double whammy; not only was Microsoft a bit dumb in not supporting Windows Virtual PC without hardware virtualisation, your hardware manufacturer was also dumb in not supporting the option in the BIOS.

    Microsoft update to Windows Virtual PC

    Belatedly, Microsoft has seen the problem with this hardware virtualisation requirement and has now released a new version of Windows Virtual PC that works without hardware virtualisation. This is really good news for those with older (or limited) CPUs and rubbish BIOS firmware.

    You can find details of how to download the new version of XP Mode here:
    http://blogs.msdn.com/virtual_pc_guy/archive/2010/03/18/windows-virtual-pc-no-hardware-virtualization-update-now-available-for-download.aspx

    And there is also an explanation of why the hardware virtualisation requirement was in place for previous releases:
    http://blogs.msdn.com/virtual_pc_guy/archive/2010/03/18/windows-virtual-pc-now-without-the-need-for-hardware-virtualization.aspx

    Read the article

  • Continuous Physics Engine's Collision Detection Techniques

    - by Griffin
    I'm working on a purely continuous physics engine, and I need to choose algorithms for broad- and narrow-phase collision detection. "Purely continuous" means I never do intersection tests, but instead want to find ways to catch every collision before it happens, and put each into a "planned collisions" stack that is ordered by TOI.

    Broad Phase

    The only continuous broad-phase method I can think of is encasing each body in a circle and testing whether each circle will ever overlap another. This seems horribly inefficient, however, and lacks any culling. I have no idea what continuous analogs might exist for today's discrete collision culling methods, such as quad-trees, either. How might I go about preventing inappropriate and pointless broad tests, the way a discrete engine does?

    Narrow Phase

    I've managed to adapt the narrow-phase SAT to a continuous check rather than a discrete one, but I'm sure there are other, better algorithms out there in papers or sites you guys might have come across. What various fast or accurate algorithms do you suggest I use, and what are the advantages/disadvantages of each?

    Final Note: I say techniques and not algorithms because I have not yet decided on how I will store different polygons, which might be concave, convex, round, or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance, if I choose an algorithm that breaks down a polygon into triangles or convex shapes I will simply store the polygon data in this form).
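    For reference, the circle-based broad phase described above reduces to solving a quadratic for the earliest time two moving circles touch. A minimal Java sketch, assuming constant velocities over the step (the class and Vec2 helper names are hypothetical, not from the question):

        final class Vec2 {
            final double x, y;
            Vec2(double x, double y) { this.x = x; this.y = y; }
            Vec2 sub(Vec2 o) { return new Vec2(x - o.x, y - o.y); }
            double dot(Vec2 o) { return x * o.x + y * o.y; }
        }

        final class SweptCircles {
            /**
             * Earliest time t >= 0 at which two circles moving at constant
             * velocity touch, or -1 if they never do. Solves |d + v t| = r1 + r2
             * with d = p2 - p1 and v = v2 - v1, i.e. the quadratic
             * (v.v) t^2 + 2 (d.v) t + (d.d - (r1+r2)^2) = 0.
             */
            static double timeOfImpact(Vec2 p1, Vec2 v1, double r1,
                                       Vec2 p2, Vec2 v2, double r2) {
                Vec2 d = p2.sub(p1);
                Vec2 v = v2.sub(v1);
                double r = r1 + r2;
                double a = v.dot(v);
                double b = 2 * d.dot(v);
                double c = d.dot(d) - r * r;
                if (c <= 0) return 0;              // already overlapping
                if (a == 0) return -1;             // no relative motion
                double disc = b * b - 4 * a * c;
                if (disc < 0) return -1;           // paths never come close enough
                double t = (-b - Math.sqrt(disc)) / (2 * a);
                return t >= 0 ? t : -1;            // negative root = contact in the past
            }
        }

    Sorting these times gives the TOI-ordered stack; pairs that return -1 can be culled without ever reaching the narrow phase.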

    Read the article

  • htaccess/cPanel 301 redirects not working for add-on domain

    - by Clemens
    I've already looked at many samples and tutorials on how to set up those 301 redirects on Apache, and can't figure out why only some of them are working:

        Options +FollowSymlinks
        RewriteEngine on

        #doesn't work:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^page-still-exists.htm$ "http://www.new.com/new-target-page.htm" [R=301,L]

        #works:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^page-does-no-longer-exist.htm$ "http://www.new.com/" [R=301,L]

        #works:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^folder/otherpage.htm$ "http://www.new.com/" [R=301,L]

        #works:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^/?$ "http://www.new.com/" [R=301,L]

        #doesn't work:
        RewriteCond %{HTTP_HOST} ^old.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.old.com$
        RewriteRule ^somepage.htm$ "http://www.old.com/some-page.htm" [R=301,L]

    I have no idea why only some of these are working. The only difference I can see is that, in the working cases, the old page no longer exists on the old domain. But whenever I want to redirect a still-existing page from the old domain to the new domain, the page on the old domain is still served. Any input is much appreciated because this is slowly driving me crazy :)

    EDIT: I added the complete htaccess file.

    EDIT 2: So I removed almost all redirects and currently my htaccess looks like this:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^old\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.old\.com$
        RewriteRule ^(.*)$ "http\:\/\/www\.new\.com\/$1" [R=301,L]

    The only redirect that is working is the simple one from old.com to new.com. A redirect like old.com/page.htm to new.com or even new.com/page.htm is not working. And actually I really don't know where this redirect is actually coming from... Can a 301 really be so complicated?
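    One way to see which rule (if any) actually fires, without browser caching of earlier 301s muddying the picture, is to inspect the raw response headers. A hedged sketch with curl (URLs are the placeholders from the question):

        curl -sI http://www.old.com/page-still-exists.htm | grep -iE '^(HTTP|Location)'
        # Expect "HTTP/1.1 301" plus a Location header if a rule matched;
        # a plain 200 means the request never hit the rewrite rule at all.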

    Read the article

  • Windows Explorer is blank

    - by Scott Mitchell
    I am using Windows 7 Ultimate x64. Once a week or so, when I boot up in the morning and launch Windows Explorer, it shows up blank: clicking on Computer doesn't load anything. Interestingly, I can go to the address bar at the top and type in a folder name. This brings up that folder's files and subfolders, but as I drill around, the tree of folders on the left only shows the immediate folder and not its siblings. There's no plus icon to expand the folder, etc. My usual "solution" is to reboot, which typically brings everything back to normal, but this is a frustrating remedy. Any idea what's going on and how to fix it? Some Googling turned up this discussion, but the remedy was to uninstall a particular piece of software that I don't have installed (Virtual Clone Drive). Thanks

    Read the article

  • mysql jdbc got ArrayIndexOutOfBoundsException when database name length = 9

    - by Thang Hoang
    The code below will throw:

        Exception in thread "main" java.sql.SQLException: Unable to connect to any hosts due to exception: java.lang.ArrayIndexOutOfBoundsException: 40

    This is MySQL 5.1 with JDBC driver 5.1.21. If I change the connection string to any database whose name length != 9, it connects and prints "Connected". If I create another database named '123456789' it throws the same exception. When I connect to another database on Amazon S3 that has the same name length (MySQL Ver 14.14 Distrib 5.5.28, for debian-linux-gnu (i686) using readline 6.2), it throws java.lang.ArrayIndexOutOfBoundsException: 43.

    Any idea about this weird MySQL behavior? Thanks.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class MysqlConnection {
            public static void main(String[] args) throws Exception {
                Connection conn = null;
                String userName = "root";
                String password = "123456";
                String url = "jdbc:mysql://localhost:3306/test12345";
                Class.forName("com.mysql.jdbc.Driver").newInstance();
                conn = DriverManager.getConnection(url, userName, password);
                System.out.println("Connected");
            }
        }

    Read the article

  • MySQL per-database replication?

    - by LucasBr
    So, my problem is interesting: we want to migrate from one server to another. We made a master-slave replication, but my boss came up with the idea to do the migration one database at a time. So he asked me to set up another MySQL instance at the new server, leave the slave almost as-is, and make the new instance become the new master incrementally, one database at a time. Is it possible? That is, can I transfer database 'x' from the old master to the new master and just tell the slave to synchronize 'x' from the new master from now on? I've read in this old thread ( Mysql Replication - are per-database threads possible? ) that this was not possible at that time. Can this be done now? Thanks! Lucas Bracher.
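    For context: MySQL does ship per-database replication filters, but they control which databases a given slave applies, not which master it follows; having one slave pull database 'x' from a new master and everything else from the old one would need multi-source replication, which MySQL of this era did not offer. A hedged sketch of the slave-side my.cnf filter (replicate-do-db is a real option, though it matches on the default database, so cross-database statements under statement-based logging can be silently skipped):

        [mysqld]
        replicate-do-db = x    # repeat the line once per database to replicate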

    Read the article

  • How to consistently enable screen sharing with iChat

    - by Joel
    I am unable to consistently get screen sharing in iChat to work. When I select an online buddy, under the Buddies menu the options "Share my screen with Bob" and "Ask to Share Bob's Screen" are disabled. Sometimes starting a chat with that person will enable the screen sharing, but often not. Once it's enabled it works fine, but I have no idea what the key is to getting it enabled. It seems fairly random when it works. This is over the public internet using Google Talk. Both ends are running OS X 10.5.

    Read the article

  • So Singletons are bad, then what?

    - by Bobby Tables
    There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, there are still many cases where I can't see a nice alternative - and not many of the anti-Singleton discussions really provide one.

    Here is a real example from a major recent project I was involved in: The application was a thick client with many separate screens and components, which used huge amounts of data from a server state that isn't updated too often. This data was basically cached in a Singleton "manager" object - the dreaded "global state". The idea was to have this one place in the app which keeps the data stored and synced, so that any new screens that are opened can just query most of what they need from there, without making repetitive requests for various supporting data from the server. Constantly requesting data from the server would take too much bandwidth - and I'm talking thousands of dollars in extra Internet bills per week, so that was unacceptable.

    Is there any other approach that could be appropriate here than basically having this kind of global data manager cache object? This object doesn't officially have to be a "Singleton" of course, but it does conceptually make sense to be one. What is a nice clean alternative here?
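    The answer most anti-Singleton discussions converge on is to keep exactly one instance but inject it rather than fetch it globally. A minimal Java sketch with hypothetical names (not from the question):

        // The single cache instance is wired in at composition time,
        // not reached for through a static accessor.
        interface ServerDataCache {
            String lookup(String key);
        }

        final class SyncedServerDataCache implements ServerDataCache {
            // ...holds and periodically syncs the cached server state...
            public String lookup(String key) { return "cached:" + key; }
        }

        final class OrderScreen {
            private final ServerDataCache cache;   // injected, not a global
            OrderScreen(ServerDataCache cache) { this.cache = cache; }
            void show() { System.out.println(cache.lookup("orders")); }
        }

        final class App {
            public static void main(String[] args) {
                ServerDataCache cache = new SyncedServerDataCache(); // one instance
                new OrderScreen(cache).show();  // every screen receives the same one
            }
        }

    The caching and the bandwidth win stay intact; what goes away is the hidden global access path, and tests can hand screens a fake cache.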

    Read the article

  • Cannot apply unity --reset after modifying files

    - by Alex Cline
    So I have an idea of what I did wrong, I am just not sure how to fix it. I used the Unity Glass mod: http://www.omgubuntu.co.uk/2012/07/unity-glass-offers-refined-new-look-for-the-unity-launcher

    After removing it, I cannot reset Unity and it does not work. Even after purging Unity and reinstalling it, I cannot seem to replace the missing files.

        $ unity --reset
        WARNING: Unity currently default profile, so switching to metacity while resetting the values
        unity-panel-service: no process found
        Checking if settings need to be migrated ...no
        Checking if internal files need to be migrated ...no
        Backend     : gconf
        Integration : true
        Profile     : unity
        Adding plugins
        Initializing core options...done
        compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1c00027
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/arcline/.compiz/session/10b624e5c8f98c5325134625607758338300000051770001"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        (compiz:7038): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'.
        (compiz:7038): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Segmentation fault (core dumped)
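    Since the trace shows the gconf backend and a stale session file, wiping the stored compiz settings before retrying the reset may help. A hedged sketch (gconftool-2 and these paths are standard for this Unity/compiz generation, but whether the mod left files elsewhere is an assumption):

        gconftool-2 --recursive-unset /apps/compiz-1   # drop stored compiz/Unity settings
        rm -rf ~/.compiz/session                       # clear the stale session files
        setsid unity                                   # restart Unity detached from the terminal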

    Read the article

  • Lots of TIME_WAIT connections in netstat (Windows Server 2008)

    - by Rhys Causey
    I'm having some issues on a Windows 2008 server with some network connections not going through. For instance, in a web application on the server, we need to open a socket connection to another server, and this fails sometimes with the following message:

        Only one usage of each socket address (protocol/network address/port) is normally permitted

    I looked up the error, which led me to this page: http://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx, which indicates that it might be TCP/IP port exhaustion. When I perform netstat -n, I get tons of TIME_WAIT connections on port 80. Does anyone have any idea what could be causing this?
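    If it is port exhaustion, the two standard levers on Server 2008 are widening the ephemeral port range and shortening how long closed sockets linger in TIME_WAIT. A hedged sketch (both settings are real, but the values are illustrative and the registry change needs a reboot):

        netsh int ipv4 show dynamicport tcp
        netsh int ipv4 set dynamicport tcp start=10000 num=50000
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
            /v TcpTimedWaitDelay /t REG_DWORD /d 30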

    Read the article

  • Salary and profit distribution in game industry?

    - by drowneath
    A couple of years ago, I started a group/team of passionate people in game development. I was the one who had the idea to form a group that will (hopefully) later become a company/real studio, and I was the one who gathered the people too. We consist of only a few people (< 10) and everyone has their own specialty in game development. For some reason, everyone agreed to make me the executive director of the group. We are currently focused on creating Flash games and mobile games. Until now, we have created a few free game titles and earned profit from some freelancing projects.

    Since I have no prior experience in running a "company", I decided to split the profit we gained from projects equally, regardless of the member's role in the company, as long as he/she was involved in and contributed a decent amount of work to the development of the project. My questions are:

    1. What is the correct way to split profit that is gained from freelance projects that are developed together?
    2. Once we've released enough products and are ready to register our company legally, what about the salary?
    3. What benefits do I have from being the founder and the director? I'm not a control-freak, but I want everything to be clear.

    Read the article

  • Clarity of the cloud with Microsoft Learning Experience.

    - by Testas
    While waiting for the Superbowl, I thought I would write this...

    2014 will not only see the release of a new version of SQL Server; accompanying it is the release of courses and certification tracks from Microsoft Learning Experience (formerly Microsoft Learning) that will support the education of SQL Server and related technologies. The notable addition to the curriculum is substantial material on cloud and big data features that pertain to data and business intelligence. There are entire modules/chapters dedicated to Power BI, SQL Azure and HDInsight.

    Certifications and courses from Microsoft can get stick – sometimes fairly and sometimes unfairly. Whilst I am a massive advocate of using the community to get information and education, Microsoft's new courses will bring clarity to the burning topics of the moment and help you to understand the capabilities of Power BI and HDInsight. From a business intelligence perspective there will be three courses:

    20463C: Data warehousing in SQL Server 2014
    20466C: Data models and reports in SQL Server 2014
    20467A: Designing Self-Service Business Intelligence and Big Data Solutions

    These are not the exact titles of the courses; those will be confirmed prior to release. And if you have already completed the SQL Server 2012 or 2008 curriculum, there is an upgrade course, 10977A: Upgrading business intelligence skills from 2008 to 2014. Again, this is not the exact title, but it should give you an idea. Look out for announcements from Microsoft Learning Experience...

    CHRIS

    Read the article

  • Trouble with Windows 7

    - by vtimmerm
    Hi, I'm trying to virtualize an application for Windows 7 but am running into trouble. The application will run fine in Windows 7 if installed in the base, but when it is virtualized it will run on XP, not on Windows 7. I have tried this in three ways:

    1. Captured on XP with ThinApp 4.0
    2. Captured on XP with ThinApp 4.5
    3. Captured on Windows 7 with ThinApp 4.5

    Even when captured on Windows 7, it will not run on Windows 7, but will run on XP. When captured with a rival product, Altiris SVS, the virtualized app runs fine on Windows 7. Any ideas what could cause this behaviour? Looking at the trace files, you can see that they are different right from the start when comparing the Windows 7 and XP trace files. What could cause it to go in completely different directions? (And why does the trace file on Windows 7 say "Operating System Unknown"? Does everybody get that on Windows 7, even with 4.5?) The error message is: "Object variable or with block variable not set". Thanks, Vincent

    Read the article

  • Linux Mint 13 64bit Cinnamon and Oracle Virtualbox: 3d acceleration crash

    - by Stephen Swensen
    I've recently got an interest in Linux. After some research, it looks like Linux Mint 13 Cinnamon is hot, and I thought I'd try it out. I'm running Windows 7 64-bit and have experience with Oracle VirtualBox, so I thought it would be a good idea to try out Linux Mint inside VirtualBox. I downloaded Linux Mint 13 64-bit Cinnamon and set it up as a VM. Nothing special about my settings, except that Linux Mint 13 Cinnamon requires 3D acceleration, and when I enable that, the guest crashes whenever I open the Menu in the bottom left corner of the guest OS (and at some other times too). I've seen other mentions of this problem on the web, but no solutions. Is there a solution? If not, any suggestions short of installing the OS on a partition to try it out? (I'm not interested in Live mode either – I'd really like to get the full feel for it.)

    Read the article

  • Looking ahead at 2011-with Paul Greenberg

    - by divya.malik
    It is almost the end of 2010 – rather unbelievable how fast this year has gone by. It is always interesting to read what our CRM gurus have to say about the coming year, so here is CRM luminary Paul Greenberg's forecast for 2011:

    1. Mobile CRM growth accelerates.
    2. CRM and "Social" companies continue to integrate their capabilities as a few suites begin to emerge.
    3. Social "rankings", as a measure of customer engagement, will become a standard public measure.
    4. Analytics exhibits the most significant growth of any area, with Customer Insight apps leading the way.
    5. Marketing apps mature, with social marketing becoming an integral part of the application offering.
    6. Customer service begins to redefine itself with greater emphasis on service communities, web self-service and customer knowledge capture.
    7. Knowledge management replaces enterprise content management as a core requirement for large businesses.
    8. Customer experience reasserts itself loudly as the core of CRM and SCRM – this one is kind of a no-brainer in a way.
    9. Co-creation and customer-driven product innovation becomes more than just an advanced idea.
    10. Microsoft Azure emerges as a true cloud provider at the level of Amazon, as cloud computing continues its rise to becoming a primary technology infrastructure.
    11. Application marketplaces will become commonplace as companies look to platform providers to fill ecosystem needs, not just CRM.

    I do encourage you to read the details of his forecasts, which are split into two blog posts. For Part I click here and for Part II, click here.

    Technorati Tags: oracle, siebel CRM, scrm, paul greenberg

    Read the article

  • Why Does My Website Redirect me to my localhost?

    - by Noah Brainey
    Alright, my website has some issues and I'm not sure what's causing them. Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links... it redirects you to your localhost in the address bar. I have no idea why it does this. I'm hosting this website on my own server, which is this computer, using XAMPP, in case that information helps. I'm also using DynDNS for my nameservers. Anyway, any help would be greatly appreciated!

    Read the article

  • [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.

    - by shah
    The web server and SQL Server are running on different machines. Below is the connection string that we are using to connect to the MS SQL database from a classic ASP web application:

        set oConn = server.createobject("ADODB.Connection")
        oConn.open "PROVIDER=SQLOLEDB;Data Source=xxx.xxx.x.xx,1433;Network Library=DBMSSOCN;Initial Catalog=databasename;User ID=xxxxx;Password=xxxxx;"

    No idea why it's losing the database connection in the middle of loading the page. Here is the error message that we got:

        Microsoft OLE DB Provider for SQL Server error '80004005'
        [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied.

    We have already verified the SQL Server 2005 remote connection settings and default port number – remote connections are enabled in SQL Server as per http://support.microsoft.com/kb/914277.

    Please help. Thanks,

    Read the article

  • ASP.NET MVC: MVC Time Planner is available at CodePlex

    - by DigiMortal
    Almost every week I get e-mails from my dear readers asking for the source code of my ASP.NET MVC and FullCalendar example. I have some great news, guys! I ported my sample application to Visual Studio 2010 and made it available at CodePlex. Feel free to visit the page of MVC Time Planner.

    NB! The current release of MVC Time Planner is the initial one, and it is basically a conversion from the VS2008 example solution to VS2010. The current source code is not study material yet, but it gives you an idea of how to make the calendars work together. Future releases will introduce many design and architectural improvements. I have also planned some new features.

    How does MVC Time Planner look? The image on the right shows you how the time planner looks. It uses the default design elements of ASP.NET MVC applications and jQueryUI. If I find some artistic skills in myself I will improve the design too, of course. :) Currently only the day view of the calendar is available; other views are coming in the near future (I hope that future is a week or two away).

    Important links – here are some links you may find useful:

    MVC Time Planner page @ CodePlex
    Documentation
    Release plan
    Help and support – questions, ideas, other communication
    Bugs and feature requests

    If you have any questions, or you are interested in new features, then please feel free to contact me through the MVC Time Planner discussion forums.

    Read the article

  • Cronjob as root?

    - by Rob
    I'm having a bit of a problem with cronjobs for backups. I've set up the following in sudo crontab -e (not under my personal account):

        0 1 * * * /backups/dobackup

    /backups/dobackup contains this:

        #!/bin/sh
        touch ITRAN
        tar -cvpjf /backups/$(date +%d.%m.%Y)_backup.tar.bz2 --exclude=/backups --exclude=/proc --exclude=/lost+found --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev /

    The backup file is created, but the file ITRAN is not. Also, the backup file is sometimes vastly smaller than expected:

        -rw-r--r-- 1 rjrudman root  371620259 2012-06-21 12:39 21.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root 1023211449 2012-06-22 18:00 22.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root    1512785 2012-06-23 01:00 23.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root 1023272455 2012-06-24 22:41 24.06.2012_backup.tar.bz2
        -rw-r--r-- 1 rjrudman root    1514027 2012-06-25 01:00 25.06.2012_backup.tar.bz2

    The backups with much larger file sizes are created by manually running sudo /backups/dobackup. It seems the cronjob is failing at some point, but I have no idea how to debug this issue or where to start. Any ideas? Running Ubuntu 10.04.
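    Two things worth checking, offered as assumptions about what is happening here: cron runs the script from its own working directory, so the relative `touch ITRAN` lands somewhere other than /backups; and cron mails stderr rather than displaying it, so capturing output to a file is the quickest way to see where tar stops. A hedged sketch:

        # in the script: use an absolute path so the marker lands where you look
        touch /backups/ITRAN

        # in the crontab: capture stdout/stderr so the failure becomes visible
        0 1 * * * /backups/dobackup >> /backups/dobackup.log 2>&1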

    Read the article

  • Encode real-time dvb-s stream using mencoder

    - by karatchov
    My satellite receiver can stream its MPEG-2 video/audio output over the LAN. Using mencoder, I'm trying to build a script to encode and save the stream in real time with my Core2Duo 1.8 GHz. Right now I'm using a single pass; it produces good quality at a video rate of 800 Kb/s, but takes more than 95% of the CPU power, which causes a lot of frameskips if the computer is used while encoding:

        mencoder -o -vf lavcdeint -oac mp3lame -lameopts abr:q=2:aq=2 -ovc x264 -ffourcc avc1 -x264encopts crf=25:me=hex:subq=9:frameref=2:nocabac:threads=auto -mc 3

    So I'm considering using two-pass encoding to relieve the processor and record 100% of the stream, but I have no idea how to start.

    For info: the standard stream is MPEG-2, 720x576 at 25 fps; the HD stream is 1920x1080 at 50 fps (recording that is not my goal, but it would be super cool if I could).
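    A note on the plan: two-pass encoding has to read the whole input twice, so it cannot be applied to a live stream as it arrives; for real-time capture the usual lever is cheaper x264 settings instead. A hedged sketch (the stream URL and output name are placeholders; me, subq and frameref are real x264encopts knobs, and lowering them trades quality for speed):

        mencoder http://receiver:port/stream -o capture.avi \
          -oac mp3lame -lameopts abr:q=2:aq=2 \
          -ovc x264 -ffourcc avc1 \
          -x264encopts crf=25:me=dia:subq=1:frameref=1:nocabac:threads=auto -mc 3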

    Read the article

  • shared hosting: ssl domain receives all server's ssl requests, google gets it wrong

    - by pixeline
    This website is hosted on a shared server. I've recently had my hosting provider secure our website using SSL (https://domain.com instead of http://domain.com). Ever since then, all https requests sent to the server are redirected to my website: https://otherdomain.com leads to a warning and then, if you continue, to my website. OK, my fault, I should have known SSL means 1 IP; otherwise, this thing can happen. But... Google Search results for my website's target keywords are now displaying these websites above my own, even though they have nothing even remotely related to the target keywords!

    Already done:

    1. Provided a canonical URL in the HTML page.
    2. Told the server manager about the problem; he tells me it's normal but he'll look for a solution. This was one week ago, no answer since.

    I have no idea why Google is indexing these https URLs: I thought someone would have to submit them, or have them inside an HTML page, in order for Google to actually index dummy https domains, but I see no reason why someone would do that. Any suggestion on how to solve this situation? Go-live is in one week and SEO looks really bad because of that.
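    Until the host isolates the certificate properly, one stopgap the site itself can apply is answering mismatched-host HTTPS requests with a 301 back to the canonical domain, so Google consolidates the URLs. A hedged mod_rewrite sketch (domain.com stands in for the real site, and this assumes .htaccess overrides are allowed on the shared server):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteCond %{HTTP_HOST} !^(www\.)?domain\.com$ [NC]
        RewriteRule ^ http://domain.com%{REQUEST_URI} [R=301,L]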

    Read the article

  • gitweb on Ubuntu Server as Location/Directory instead of Virtual Host

    - by mbx
    Since DynDNS no longer resolves subdomains for free, I have to serve gitweb from a subdirectory of the apache2 server. Usual suspects such as Pro Git suggest something like:

        <VirtualHost *:80>
            ServerName gitserver
            DocumentRoot /srv/gitosis/repositories/
            <Directory /srv/gitosis/repositories/>
                Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
                AllowOverride All
                order allow,deny
                Allow from all
                AddHandler cgi-script cgi
                DirectoryIndex gitweb.cgi
            </Directory>
        </VirtualHost>

    I tried various variations using Location and Directory tags with different attribute combinations, without any notable success. My first idea was close to the following:

        Alias /gitweb /srv/gitosis/repositories

        <Location /gitweb>
            AuthType Basic
            AuthName "gitweb Repository view"
            AuthUserFile /etc/apache2/gitweb.passwd
            Require valid-user
            SSLRequireSSL

            SetEnv GITWEB_CONFIG /etc/gitweb.conf
            AddHandler cgi-script cgi
            DirectoryIndex /usr/lib/cgi-bin/gitweb.cgi
        </Location>

    Apache is in the gitosis group, and the repositories are readable and executable for that group. So, what is the intended way to get gitweb running on Ubuntu 10?

    Read the article

  • Ubuntu mount hard drive confusing

    - by Fresheyeball
    I'm new to Linux and have a home server set up running Ubuntu. In the UI it's very easy to mount my additional internal hard drives: I just double-click on them. Since I have made this server headless, I now need to mount via the command line. How can I replicate the very simple double-click GUI behavior? So far all the information I've found is very complex. Ubuntu auto-generated folders for each hard drive under /media, and I can see the hard drives under /dev, but I have no idea which is which, as the hardware is identical between them. I also don't know how they are formatted. Thanks in advance for any advice you have.
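    A hedged sketch of the command-line equivalent using standard tools (the device name sdb1 and the mount point are assumptions; blkid shows which partition is which, along with its filesystem type):

        sudo blkid                        # list partitions with labels, UUIDs, fs types
        sudo mkdir -p /media/data
        sudo mount /dev/sdb1 /media/data  # mount the one you identified
        # optional /etc/fstab line so it mounts at boot (UUID taken from blkid):
        # UUID=xxxx-xxxx  /media/data  ext4  defaults  0  2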

    Read the article
