Search Results

Search found 24708 results on 989 pages for 'internet filter'.

  • What is the best drink to drink when you have read nonsense questions on programmers?

    - by stefan
    I am having a hard time deciding what drink to drink after I have read a nonsense question on programmers.stackexchange. It's either beer or whisky; the beer is nice since you can drink it somewhat relaxed, but sometimes I feel the need for something "stronger" because the question is so utterly nonsensical and stupid. Every time I have read a stupid/nonsense question on programmers.stackexchange.com I've questioned myself why I didn't write some code instead. I could've probably written countless lines of code, together probably building a new Facebook or Linux by now. But instead I sacrificed my precious time reading questions that shouldn't have been posted on the internet. It really makes me frustrated; I guess that is why I am so often considering the whisky instead of the beer, since beer will maybe not calm me down enough and then I have to take the whisky too - together it's a) slightly more expensive and b) more time consuming. So, what is the best drink?

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

    Pattern: Pull from Cloud. The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. It is particularly suited to integration approaches where messages trickle in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service (a code sketch of this pattern follows at the end of this post). In the home analogy, you avoid any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: on-premise assets are not exposed to the Internet; firewall issues are avoided by only initiating outbound connections. Cons: polling mechanisms may affect performance and may not satisfy near real-time requirements.

    Pattern: Open Firewall Ports. The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. In the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door allows the postman to drop off small parcels, but there is still concern about cutting new holes for larger packages. Pros: the optimal pattern for near real-time needs; simpler administration once the service is provisioned. Cons: needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access.

    Pattern: Virtual Private Networking. The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system over a trusted channel. In the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home. Pros: individual firewall ports don't need to be opened; more suited to high-scalability needs; can support large-volume data integration; managing one connection is easier than a multitude of open ports. Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.

    Pattern: Reverse Proxy / API Gateway. The on-premise system uses reverse proxy "API gateway" software in the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations in the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them. Pros: very secure; very flexible. Cons: introduces a new software component; needs DMZ deployment and management.

    Pattern: On-Premise Agent (Tunneling). A lightweight "agent" sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH tunneling, etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, but you still take precautions with chain locks and package inspections. Pros: lightweight software; IT doesn't need to set up anything. Cons: may bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software.

    Conclusion. The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs. async implementation, near real-time vs. batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
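
    As an illustration of the first pattern, here is a minimal C# sketch of an on-premise "Pull from Cloud" poller: the on-premise side initiates outbound HTTPS calls on a schedule, so no inbound firewall ports are opened. The endpoint URL, polling interval, and message handling are hypothetical placeholders, not any Oracle API.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        // "Pull from Cloud" sketch: outbound-only polling from behind the firewall.
        // The endpoint and interval are invented placeholders.
        class CloudPoller
        {
            private static readonly HttpClient Client = new HttpClient();

            static async Task Main()
            {
                var endpoint = "https://saas.example.com/api/events"; // hypothetical SaaS endpoint
                while (true)
                {
                    try
                    {
                        string batch = await Client.GetStringAsync(endpoint);
                        Console.WriteLine($"Pulled {batch.Length} bytes");
                        // Hand the batch to the on-premise mediation layer (e.g. a queue) here.
                    }
                    catch (HttpRequestException ex)
                    {
                        Console.WriteLine($"Poll failed, will retry: {ex.Message}");
                    }
                    await Task.Delay(TimeSpan.FromMinutes(60)); // hourly schedule, as in the example above
                }
            }
        }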

    Read the article

  • Javascript Rookie Question: Define Variables Inline

    - by Dylan Kinnett
    I'm proficient with HTML and CSS, but I'm still pretty shaky when it comes to JavaScript. That said, I've been able to build a site using the Internet Archive Book Reader, which relies on reader.js. Here's a copy of one of my versions of reader.js: https://gist.github.com/dylan-k/ed4efed2384e221d46cc It's a good site, but I find I have to repeat things a lot. Basically, I have one copy of reader.js for every page/book featured on the site. It seems there must be a better way. I re-use the script, making copies, just so that I can change lines 28, 80, 83, 84. Is there a way I could include just one copy of reader.js and then use a <script> tag to define these 4 lines for the individual pages?

    Read the article

  • Innovation on CS

    - by guiman
    Hi all, it's been some time that I've been thinking about creating something that could help the web community and leave some mark, but the problem is that there are no ideas popping out of my head. So two questions come to my mind: First, how did projects like Twitter, Google or any other big project that has changed our way of living on the internet get started? Secondly, do we have to force it, or just keep doing what we do best and wait for that idea that could change things? While writing this question, one quote comes to mind: "Imagination is more important than knowledge" - Albert Einstein

    Read the article

  • Uploading a file automatically for speed test?

    - by Abhi
    I am building a web UI for an internet connection device, and one of the requirements is a speed test. I know the basic concept of how a speed test works: a file is downloaded for a limited time, then the same file is uploaded again, and the speed is tracked at regular intervals. Downloading the file is not an issue, but how am I supposed to upload the file without the client noticing that a file is being uploaded? I've read through a lot of documentation, but I'm still not able to find out how to upload a file from the client's machine without asking the user to select one.
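
    For the upload leg, nothing needs to be read from the client's disk: the payload can be generated in memory, so the user never selects a file. A minimal sketch of the idea in C# (the same approach works in browser JavaScript with a generated Blob); the endpoint URL is a hypothetical placeholder:

        using System;
        using System.Diagnostics;
        using System.Net.Http;
        using System.Threading.Tasks;

        // Upload speed test sketch: POST a randomly generated in-memory buffer
        // and time it. The endpoint URL is an invented placeholder.
        class UploadSpeedTest
        {
            static async Task Main()
            {
                var payload = new byte[5 * 1024 * 1024];   // 5 MB of throwaway data
                new Random().NextBytes(payload);

                using var client = new HttpClient();
                var sw = Stopwatch.StartNew();
                await client.PostAsync("https://speedtest.example.com/upload",
                                       new ByteArrayContent(payload));
                sw.Stop();

                double mbps = payload.Length * 8 / 1e6 / sw.Elapsed.TotalSeconds;
                Console.WriteLine($"Upload: {mbps:F2} Mbit/s");
            }
        }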

    Read the article

  • Assigning Static Public IP Address to Windows Server 2008

    - by Neeti
    Please help a newbie. I am new to Windows Server. I have an IBM server and I have installed Windows Server 2008 R2 on it. I have been provided with a static IP address by my ISP. How can I assign that to my server? I have a web application hosted on the server which I need to access from the outside world using an internet browser. How can this be achieved? Please let me know if there are any tutorials or step-by-step guides for achieving what I am trying to do.
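
    For reference, a static address can be assigned from an elevated command prompt with netsh; the interface name and all addresses below are placeholders for the values your ISP supplied:

        netsh interface ipv4 set address name="Local Area Connection" static 203.0.113.10 255.255.255.0 203.0.113.1
        netsh interface ipv4 set dnsservers name="Local Area Connection" static 203.0.113.53 primary

    If the DNS syntax differs on your build, the same settings are available in the adapter's "Internet Protocol Version 4 (TCP/IPv4)" properties dialog under Network Connections.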

    Read the article

  • Hadopi: Free refuses to fund the graduated response, estimated at more than 70 million euros

    Update of 24.03.2010 by Katleen. Hadopi: Free refuses to fund the graduated response, estimated at more than 70 million euros. Xavier Niel, founder and majority shareholder of Free, spoke publicly yesterday about the Hadopi law - and not favorably. The businessman notably attacked the graduated response (the repressive component of the fight against piracy, whose harshest sanction is the suspension of Internet access). Technically, it will be up to the ISPs to hand over the personal data of an Internet user judged to be infringing, at the request of the High Authority. ISPs will furthermore be asked to monitor Internet users...

    Read the article

  • How can I get Shockwave Flash to work on Ubuntu 12.04

    - by John Hill
    I've tried most of the things suggested on this site and others so far. It's kind of frustrating because this is a home computer and I mainly use it for browsing the internet and watching YouTube videos. When I try to install something that might fix it, it just downloads a bunch of files and says to extract them or whatever. I'm not sure what to do with the files after I extract them. That's probably the main issue: I don't have a lot of experience working with computers at this level. I'm used to Windows, which seems to make most software installs idiot-proof. Any suggestions are appreciated!!

    Read the article

  • Česká obchodní banka, a.s. Upgrades to Oracle Database 11g On Time, On Budget and without Disrupting Business Operations

    - by jgelhaus
    You want the new features of the latest release, but upgrading a database is one of those things DBAs can "lose sleep" over. Česká obchodní banka, a.s. ("CSOB") needed to upgrade the production systems in the Czech Republic and Slovakia that supported 90 key applications for its retail, corporate, internet, and ATM services from Oracle Database 9i to Oracle Database 11g, with a simultaneous migration from Alpha processor/OpenVMS-based hardware to a POWER7, AIX system. Oracle Consulting helped to complete the upgrade within schedule and budget, while meeting tight restrictions on downtime. Knowledge transfer by Oracle Consulting to the bank's IT team has improved self-sufficiency in support and maintenance, while the technical and advisory services of Oracle Consulting Expert Services continue to optimize performance and availability while lowering cost of ownership. Read how CSOB maximized the value of its investment in Oracle Database technology with an upgrade to Oracle Database 11g.

    Read the article

  • XNA model parts are overlaying others

    - by Federico Chiaravalli
    I am trying to import into XNA an .fbx model exported from Blender. Here is my drawing code:

        public void Draw()
        {
            // Compute each mesh's absolute transform from the model's bone hierarchy
            Matrix[] modelTransforms = new Matrix[Model.Bones.Count];
            Model.CopyAbsoluteBoneTransformsTo(modelTransforms);

            foreach (ModelMesh mesh in Model.Meshes)
            {
                foreach (BasicEffect be in mesh.Effects)
                {
                    be.EnableDefaultLighting();
                    be.World = modelTransforms[mesh.ParentBone.Index] * GameCamera.World * Translation;
                    be.View = GameCamera.View;
                    be.Projection = GameCamera.Projection;
                }
                mesh.Draw();
            }
        }

    The problem is that when I start the game, some model parts are drawn on top of others instead of behind them. I've tried downloading other models from the internet, but they have the same problem.
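
    A common cause of this symptom in XNA 4 is drawing with the depth buffer disabled (SpriteBatch.Begin, for example, resets the depth-stencil state). A minimal sketch of the usual fix, assuming XNA 4, to place at the top of Draw():

        // Restore depth testing before drawing 3D geometry; SpriteBatch
        // (if used elsewhere in the frame) leaves the depth buffer disabled.
        // Sketch assuming XNA 4; GraphicsDevice comes from the Game instance.
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.BlendState = BlendState.Opaque;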

    Read the article

  • How to price code reviews to encourage good behavior?

    - by Chris Clark
    I work for a company that has a hosted .NET internet application with many clients. Those clients often want to write customizations for our application. We have APIs to hook into the app, but the customizations themselves are written in .NET. This is a shared, secure hosting environment, and we have to code review these customizations before we can deploy them in our datacenter, to ensure that they don't degrade performance, crash our servers, or open any security vulnerabilities. We charge for these code reviews. The current pricing model is simply a function of the number of lines of code. I think this is a bad idea for a variety of reasons, but primarily because, if we are interested in verifying that the code works as expected, we should be incentivizing good, readable code, not compaction. I would like to propose a pricing model that incorporates some or all of the following as inputs: lines of code, cyclomatic complexity, average function length, and number of functions. Are there any other metrics I should incorporate, or other ideas for how we can reasonably create pricing for code reviews that encourages safe and understandable code?
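
    A sketch of what such a model could look like in C# - every weight and the base fee here are invented placeholders, not a calibrated price:

        using System;

        static class ReviewPricing
        {
            // Hedged sketch: the weights and base fee are invented placeholders.
            public static decimal PriceCodeReview(int linesOfCode,
                                                  int totalCyclomaticComplexity,
                                                  double avgFunctionLength,
                                                  int functionCount)
            {
                const decimal baseFee = 200m;             // fixed cost of any review
                decimal price = baseFee
                    + 0.10m * linesOfCode                 // volume still matters, just less
                    + 2.00m * totalCyclomaticComplexity   // complexity drives review effort
                    + 1.50m * (decimal)Math.Max(0.0, avgFunctionLength - 30.0); // long functions cost extra

                if (functionCount > 0 && avgFunctionLength <= 30.0)
                    price *= 0.90m;                       // reward small, well-factored functions

                return Math.Round(price, 2);
            }
        }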

    Read the article

  • Can we set up an SVN server on a local computer without any network access?

    - by Aitezaz Abdullah
    I want to set up an SVN repository on my computer without any network access. I am working on code without any collaborators, so I don't want it to be publicly available. I read the following post: http://stackoverflow.com/questions/6001445/local-source-control-repository-cross-platform but it suggests using online SVN repository services that give you free repositories. In that case, my code would be publicly available (as required by the terms of the free plans). So I was wondering: can I set up a local server on my Windows XP machine that only I can access, even when I don't have any internet connection?
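
    For what it's worth, Subversion doesn't need a server process at all for single-user use: a repository on the local disk is reachable over the file:// protocol, entirely offline. A minimal sketch (all paths are placeholders):

        svnadmin create C:\svn\myrepo
        svn import C:\myproject file:///C:/svn/myrepo/trunk -m "Initial import"
        svn checkout file:///C:/svn/myrepo/trunk C:\work\myproject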

    Read the article

  • Facebook and Gmail stop working after 10 minutes

    - by Julia
    I have a problem with Facebook and Gmail only: each works fine and lets me log in, view photos and videos, read new messages, etc., but after 5-10 minutes it doesn't load at all: "This webpage is not available. The webpage at http://www.facebook.com/ might be temporarily down or it may have moved permanently to a new web address." The original error message below it is "Error 101 (net::ERR_CONNECTION_RESET): Unknown error". After deleting cookies this problem disappears for 5-10 minutes, then I get the same error. It happens with Google Chrome and Firefox. Ping works fine. I have checked System-Preferences-Network Proxy; it is set to the default, "Direct Internet Connection". Then I ran the tests at chrome://net-internals/#tests and got some FAIL results: "Use system proxy settings", "Disable IPv6 host resolving", and "Probe for IPv6 host resolving" (IPv6 is disabled).

    Read the article

  • Problem mounting USB Mobile Broadband dongle in Ubuntu 13.04

    - by Still A Learner
    I have been using Ubuntu 13.04 for the last two months and I have a problem mounting my USB dongle. I have a Lenovo G-580 series laptop. It has three USB ports, which are 3.0. When I attach the dongle to one of the three USB ports, the device gets mounted immediately, but with the other two ports I have to reboot the OS while the dongle is attached to the laptop in order to mount it and connect to the internet. My dongle is a ZTE. I don't get what is going wrong.

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer) - the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] - too old, [Real-Time Rendering, Akenine-Möller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient Multifragment Effects on Graphics Processing Units, Louis Frederic Bavoil] and some internet discussions. But there is not much information there and I want to read more. It should cover application, driver, memory and shader unit communication and data transfers; vertices and attributes; also the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers and rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us). I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.) On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously). And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic: smart; gets things done (thanks for these two, Joel); not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one). *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again. We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is: excellent coder. For test engineers, it is: good at finding problems in software; competent coder. Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with. My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now: (LOTS of) People apply for jobs. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer). We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team. Anyone who's made it this far will receive an offer from us. *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles. **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between. ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code they're not going to work out. Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well. 
This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process. The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.) No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN! OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see: How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.) How many of them submit a good assessment? (More/less than at present?) How much overhead is there for us in dealing with these assessments compared to now? What are the success and failure rates at each interview stage compared to now? How many people are we hiring at the end of it compared to now?
I think it'll work because I hypothesize that, amongst other things: It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment? Candidates who would submit a shoddy application probably won't feel motivated to do the assessment. Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment. In general, only the better candidates will complete and submit the assessment. Marking assessments is much less work so we'll be able to deal with any increase that we see (hopefully we will see). There are obviously other questions as well: Is plagiarism going to be a problem? Is there any way we can detect/discourage potential plagiarism? How do we assess candidates' education and experience? What about their ability to communicate in writing? Do we still want them to submit a CV afterwards if they pass assessment? Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment? How does this affect our relationship with recruitment agencies we might use to hire for these roles? So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes! *I'm going to play dirty by offering them beer and chocolate during meetings. Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely. The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvass for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site. I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?" Technorati Tags: recruitment,hr,developers,testers,red gate,cv,resume,cover letter,assessment,sea change

    Read the article

  • Lending epub files for limited time to users

    - by JP Hellemons
    I am looking for components to build a digital library that lends people epub files (ebooks) for about a week - a digital version of the old offline public library. Now, I have found several Flash (PDF) file streaming solutions, but those would require an active internet link, and like the public library, you should be able to take your books to the beach or the pool on holiday abroad where you have no connection. So streaming is not an option. The other file restriction method I found was DRM, but that would require a really expensive license for Adobe Content Server 4, which is not suitable for my little hobby project. Yet it seems that Adobe Content Server and Adobe Digital Editions are the only option at the moment. Or are there open source alternatives?

    Read the article

  • Learning about security and finding exploits

    - by Jayraj
    First things first: I have absolutely no interest in learning how to crack systems for personal enrichment, hurting other people or doing anything remotely malicious. I understand the basis of many exploits (XSS, SQL injection, use after free etc.), though I've never performed any myself. I even have some idea about how to guard web applications from common exploits (like the aforementioned XSS and SQL injection) Reading this question about the Internet Explorer zero-day vulnerability from the Security SE piqued my curiosity and made me wonder: how did someone even find out about this exploit? What tools did they use? How did they know what to look for? I'm wary about visiting hacker dens online for fear of getting my own system infected (the Defcon stories make me paranoid). So what's a good, safe place to start learning?

    Read the article

  • Licensing approach for .NET library that might be used desktop / web-service / cloud environment

    - by Bobrovsky
    I am looking for advice on how to architect licensing for a .NET library. I am not asking for tool/service recommendations or anything like that. My library can be used in a regular desktop application or in an ASP.NET solution, and now Azure services come into play. Currently, for desktop applications the library checks whether the application and company names from the version information are the same as the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems: an Azure-enabled web application can run on different hardware each time (AFAIK); sometimes the hardware ID for the same hardware changes unexpectedly; and checking the hardware ID or version info might not be allowed in some circumstances (shared hosting, for example). So I am thinking about what approach I can take to architect a licensing scheme that: is friendly to customers (I do not try to fight piracy, but I do want to warn the customer if he uses the library on more servers than he paid for); can be used when there is no internet connection; and can be used on shared hosting. What would you recommend?
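
    A minimal sketch of the desktop-side check described above, assuming a made-up key format ("Company|Product", Base64-encoded) rather than any real scheme:

        using System;
        using System.Diagnostics;
        using System.Reflection;
        using System.Text;

        static class LicenseCheck
        {
            // Sketch: compare the host executable's version information against
            // the names the key was issued for. The key format is a placeholder.
            public static bool IsLicensedForHostApp(string licenseKey)
            {
                string[] licensed = Encoding.UTF8
                    .GetString(Convert.FromBase64String(licenseKey))
                    .Split('|');                          // [0] = company, [1] = product

                FileVersionInfo info = FileVersionInfo.GetVersionInfo(
                    Assembly.GetEntryAssembly().Location);

                return string.Equals(info.CompanyName, licensed[0], StringComparison.OrdinalIgnoreCase)
                    && string.Equals(info.ProductName, licensed[1], StringComparison.OrdinalIgnoreCase);
            }
        }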

    Read the article

  • Silverlight Has a Bright Future

    The first positive sign for the platform's developers is the fact that Microsoft made Silverlight 5's release candidate available at the beginning of this month. This points to an official final release in the near future. While Silverlight 5's upcoming release is good news, Microsoft put a damper on it when it recently debuted Windows 8 and noted that it would favor HTML5 instead. In addition, the company said that plugins would not be supported in the Metro version of Internet Explorer on Windows 8, leaving plugins such as Silverlight and Adobe Flash out in the cold. Add in a strategy o...

    Read the article

  • LAN visibility and network sharing

    - by takeshin
    I have two Ubuntu machines with wifi network cards configured as DHCP interfaces.

    machine1: inet addr:192.168.168.105 Bcast:192.168.168.255 Mask:255.255.255.0 Gateway: 192.168.168.252
    machine2: inet addr:192.168.168.104 Bcast:192.168.168.255 Mask:255.255.255.0 Gateway: 192.168.168.252

    They are connected to the router (inet addr: 192.168.168.252), and the internet connection from the router is accessible on both machines. How do I share files between the two? I have already tried a few approaches (e.g. Samba), but it looks like the machines are not visible to each other:

    @machine1$ ping 192.168.168.104
    From 192.168.168.105 icmp_seq=1 Destination Host Unreachable

    What do I need to configure? AppArmor? The firewall?

    Read the article

  • How can I host a website on a dynamically-assigned IP address?

    - by nick
    I recently upgraded my internet to the point that it is much faster and more reliable than my current webhost. I would like to move my domain to be hosted at home, but my IP address is dynamic. As far as I know, I only get a new IP when I restart my modem and/or router (which is almost never) or when Cable One (my ISP) pushes out a firmware update (rarely). There are a few ways I can see doing this: convince my ISP to give me a static IP; assign my router my current IP to force a static IP (which might work?); or set my DNS record to my current IP address and update it on the rare occasions that it changes (see the sketch below). Obviously I'm hoping that the first one works, but I don't want to pay a lot of extra money (if that's what it takes) to get a static IP address. Which of these options will work most reliably?
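
    The third option is usually automated with a small dynamic-DNS updater that checks the current public address and calls the DNS provider's update API when it changes. A sketch; both URLs and the state file are hypothetical placeholders, and real providers each publish their own update API:

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        // Dynamic-DNS updater sketch. Both endpoints are invented placeholders.
        class DdnsUpdater
        {
            static async Task Main()
            {
                using var client = new HttpClient();

                string currentIp = (await client.GetStringAsync("https://ip.example.com/")).Trim();
                string stateFile = "last_ip.txt";
                string lastIp = File.Exists(stateFile) ? File.ReadAllText(stateFile) : "";

                if (currentIp != lastIp)
                {
                    // Repoint the A record via the provider's (hypothetical) update API.
                    await client.GetStringAsync(
                        $"https://dns.example.com/update?host=example.org&ip={currentIp}");
                    File.WriteAllText(stateFile, currentIp);
                    Console.WriteLine($"DNS record updated to {currentIp}");
                }
            }
        }

    Run on a schedule (e.g. a scheduled task), this keeps the record current across the rare IP changes described above.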

    Read the article

  • Survival rate of open source projects

    - by Shogoot
    I'm trying to write a paper on what gives an open source project good odds of survival. I've found very few articles on the internet on the topic - or I'm just searching with the wrong terms. I've tried: "open source" survival; "open source" success failure; "open source" determinants for success. So far I've only found this, which says a little on the topic. So I turn to you, my dear stackers! Help me find some arguments and articles that will throw some clarity on the subject.

    Read the article

  • Insanity joining Server 12.04 to Windows AD

    - by Andrew Stebenne
    I've seen lots of stuff about this error on the internet, but I've had no luck implementing a solution. I'm trying to use Likewise-Open to join an Ubuntu Server 12.04 machine to a Windows Active Directory domain controller and having absolutely no luck. I run: sudo domainjoin-cli join my.domain.gob ADMINISTRATOR and get back (after putting in the admin password): Error: DNS_ERROR_BAD_PACKET [code 0x0000251e] A bad packet was received from a DNS server. Potentially the requested address does not exist. I have heard that Windows AD insists on being your only nameserver, and in attempts to make that work I've tried editing /etc/resolv.conf and /etc/network/interfaces, but none of that has worked either. Ideas?
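
    If the domain controller has to be the only nameserver, a hedged sketch of what /etc/resolv.conf would contain (the address is a placeholder for your DC):

        # placeholder address for the AD domain controller
        nameserver 192.168.1.5
        search my.domain.gob

    Note that on Ubuntu 12.04 resolv.conf is regenerated by resolvconf, so a persistent change usually belongs in /etc/network/interfaces as a dns-nameservers line on the interface instead.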

    Read the article
