Search Results

Search found 34580 results on 1384 pages for 'technology is good'.


  • What is the good side of PageRank?

    - by SharkTheDark
    I am doing research about backlinks/PR/SEO/search result position, and everything I read says that PageRank is not important - that it used to matter, but now it doesn't matter at all. The only useful thing I found about it is that it can "change search result position", but ONLY when two sites have the same keywords and the same content value; then the search engine checks which site has the higher PR and places that site above the lower one. Google supposedly counts PR as about 20% of its ranking, and Yahoo! as about 3%... correct me if I am wrong... Is there any other good thing about it?

    Read the article

  • Leading a not-so-good team

    - by vinoth
    How would you manage if you were allocated a team of five with, say, four incompetent programmers, and you were asked to lead? Obviously you can't code for the four of them (you can, but that is not a good idea; at least, I burned out doing that). Have you come across this kind of situation? Edit: I think I sounded rude by choosing the wrong word (incompetent) to describe my problem. To rephrase the question: how do you deal with people who do not complete assigned tasks (for whatever reason, ranging from incompetence to an "I don't care" attitude)?

    Read the article

  • Is Perforce as good at merging as DVCSs?

    - by dukeofgaming
    I've heard that Perforce is very good at merging. I'm guessing this has to do with the fact that it tracks changes as changelists, where you can group differences across several files in a single change. I think this implies Perforce gathers more metadata and therefore has more information for smarter merging (at least smarter than Subversion, since Perforce is also centralized). Since this is similar to how Mercurial and Git handle changes (I know DVCSs track content rather than files), I was wondering whether somebody knew the subtle differences that make Perforce better or worse at merging than a DVCS like Mercurial or Git.

    Read the article

  • Are python's cryptographic modules good enough?

    - by Aerovistae
    I mean, say you were writing professional-grade software that would involve sensitive client information. (Take this in the context of me being an amateur programmer.) Would you use hashlib and hmac? Are they good enough to secure data? Or would you write something fancier by hand? Edit: given that those libraries contain more or less the best hashing algorithms in the world, I guess it's silly to ask whether you'd "write something fancier." What I'm really asking is whether they are enough on their own.
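    As a purely illustrative sketch (not the asker's or any answerer's code), here is the kind of thing the standard library modules in question, hashlib and hmac, typically get used for; note that they cover hashing and message authentication, not encryption of data at rest:

      import hashlib
      import hmac
      import os

      # Password storage: never store a bare hash; use a random salt and many iterations.
      salt = os.urandom(16)
      digest = hashlib.pbkdf2_hmac("sha256", b"client-password", salt, 200_000)

      # Message authentication: a keyed HMAC, verified with a constant-time comparison.
      key = os.urandom(32)
      tag = hmac.new(key, b"sensitive payload", hashlib.sha256).digest()
      assert hmac.compare_digest(tag, hmac.new(key, b"sensitive payload", hashlib.sha256).digest())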

    Read the article

  • What are good gui guidelines for standard actions (usability)

    - by Michael Durrant
    For example: deletes should have confirmations. Confirmations should be green. Prefer a list of values over free text whenever possible. That was just a sample. I am looking for references that simply and clearly list common "should do's" for UI, interaction, and usability. My company is new to software development and keeps getting surprised by contractors who don't do the obvious, so I am looking for good references on the right way to do it and the basic things to always consider (like the above). Obviously style is subjective, but things like delete confirmations shouldn't be.

    Read the article

  • System response times --- A good Service Level Agreement?

    - by mpeterson
    In order to view system performance, I have been asked by management to give page response times for a few key pages. I want to make sure I am giving a good picture of the overall health of the system, and not just narrowing in on a single measurement. So my question is: When developing software, what metrics would you provide to your stakeholders to indicate a system that is healthy and running well? (if it is not running well, that should also be evident! Not trying to hide/obscure any problems.)
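    One lightweight way to report such metrics (a sketch only; the sample numbers below are invented, and the choice of percentiles is an assumption rather than something from the question) is to give percentiles rather than a single average, since one very slow request can hide behind a healthy-looking mean:

      def percentile(samples, p):
          """Nearest-rank percentile of a list of response times in milliseconds."""
          ordered = sorted(samples)
          rank = max(1, round(p / 100.0 * len(ordered)))
          return ordered[rank - 1]

      response_times_ms = [120, 95, 110, 2300, 105, 98, 130, 101]  # invented sample data
      for p in (50, 95, 99):
          print(f"p{p}: {percentile(response_times_ms, p)} ms")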

    Read the article

  • Is MochaHost as good as it sounds?

    - by gilly3
    It's time for me to find a new web host, and I'm a bit overwhelmed by the selection. I need a Windows host. One provider that seems to stand out is MochaHost. Here are a few of the things that look amazing to me: 2 free domains for life (2!); MS SQL support with unlimited databases; 5,000 sites; $3.33/month; unlimited everything (traffic, storage, domains, etc.). Is MochaHost too good to be true? Maybe it is notoriously unreliable (despite their 100% uptime guarantee)? Are there other considerations I may be forgetting?

    Read the article

  • Good resources for learning about graphics hardware

    - by Ken
    I'm looking for some good learning resources for graphics hardware (and the associated low-level software). Basically I want to learn more about what goes on underneath the OpenGL/DirectX API layers in terms of how things are implemented. I'm familiar with what happens in principle during the various stages of the rendering pipeline (viewing, projection, clipping, rasterization, etc.). My goal is to be able to make better and more informed decisions about trade-offs and potential optimisations when graphics/shader programming, with respect to the following kinds of issues: batching, view culling, occlusion, draw order, avoiding state changes, triangles vs. point sprites, texture sampling, etc. Basically, whatever a graphics programmer needs to know about modern graphics hardware in order to become more effective. I'm not really looking for specific optimisation techniques; rather, I need more general knowledge so that I will naturally write more efficient code.
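    To make two of those items concrete, "avoiding state changes" and "batching", here is a small, hypothetical sketch (names like bind_shader, bind_texture and draw are placeholders, not any real API) of the common trick of sorting a frame's draw calls by the state they require, so that consecutive calls share bindings:

      from collections import namedtuple

      DrawCall = namedtuple("DrawCall", "shader texture mesh")

      def submit_frame(draw_calls, bind_shader, bind_texture, draw):
          # Sorting by (shader, texture) clusters calls that need the same expensive
          # state, so redundant binds are skipped and calls batch more naturally.
          current_shader = current_texture = None
          for call in sorted(draw_calls, key=lambda c: (c.shader, c.texture)):
              if call.shader != current_shader:
                  bind_shader(call.shader)
                  current_shader = call.shader
              if call.texture != current_texture:
                  bind_texture(call.texture)
                  current_texture = call.texture
              draw(call.mesh)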

    Read the article

  • Results stored in a session - good idea?

    - by Nick
    To give a bit of background, let's say it's a generic results page, which is paginated so there are X results per page. Generally, to do this I run two queries on the page: one to get the total number of results, and one to get the results themselves, limited to the correct page's result set. However, recently I've been trying to cut down on the queries the site is making, and I thought one way to do this would be to run the query only if any of the page's parameters have changed (except, of course, the page number). This would cache all the result ids in a session, which can then be sliced to return the correct result set for each page. I've tried to look around the net for downsides of this method, but I've found very little information about it. Has anyone done this before? Is it a good idea?
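    A minimal, framework-agnostic sketch of the idea (the session object and the run_query callable are assumed stand-ins for whatever the site actually uses):

      PER_PAGE = 20

      def get_page(session, params, page, run_query):
          """run_query(params) returns the full list of matching ids; it is only
          called when the filter parameters change, otherwise the cached ids are sliced."""
          cache_key = repr(sorted(params.items()))
          if session.get("results_key") != cache_key:
              session["results_key"] = cache_key
              session["result_ids"] = run_query(params)      # one query for all ids
          ids = session["result_ids"]
          start = (page - 1) * PER_PAGE
          return ids[start:start + PER_PAGE], len(ids)       # page slice plus total count

    The usual trade-offs are that the cached ids can go stale if the underlying data changes, and that large result sets inflate session storage.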

    Read the article

  • Can you recommend a good test plan template?

    - by Ethel Evans
    Can you recommend a good test plan template for an agile testing team? I know there are templates for testing on the web and have already looked at some found by search engines, but I could really use something lightweight and something that has already been tried by skilled testers and is known to work well. Many templates I've seen give me the feeling that writing test documents is expected to be a third of the work that those testers are doing, but my team really prefers to use less documentation and more actual test writing. We use a wiki for documentation, so an approach that lends itself to living documents would be great. My hope is that using a more structured approach to test planning will increase the usefulness of my test plan while reducing the effort to create it by allowing me to think about the tests, and not the format and structure of the plan. My workplace does not have something already on hand, so whatever I start doing might be adopted by the company.

    Read the article

  • Developer career feeling like going back in time every new job [closed]

    - by komediant
    Is there a good category for this question? My background is a bachelor's in ICT, and as a hobby I have been programming since I was around twelve, I think. I started with QBasic, Pascal, C, Java, et cetera. I have now been working for about eight or nine years: half in academia/medical, half in the corporate world. A few years ago I started with frameworks, beginning with Grails (on top of Spring/Hibernate), which was a heavenly job - very productive and no hassle. At my previous job I developed in pure Spring/Hibernate Java, which meant a bit more writing of annotations and XML and no conventions like Grails, but I still liked Spring/Hibernate a lot, along with the professional setup: a development street, versioning, Jenkins/Sonar, log4j, and a good IDE like IntelliJ. It felt quite "clear" and organised, although I knew Grails felt a bit more productive. But... at my current job almost half the code is pure servlets, hard-coded JDBC (connections handled by yourself), scriptlets in all the JSP pages, no service layer, no versioning, no Maven, HTML in the DAO layer, JAR hell, and no hot-swap deployment locally - for every change you have to deploy and hope it works fine on the server. All local development needs ugly scriptlet tags to check which environment it is running in. Et cetera. Every now and then the developers work into the evening - I don't - and still lots of issues are not solved and new projects are waiting. I hear the developers complaining, but somehow they feel like what they have now is "advanced", or they are in a sort of comfort zone. The lead developer seems open to new things, but half of the time he says he can implement MVC-framework features himself instead of using what is already out there. So in short, I currently feel like I am missing all the modern framework techniques and that the company is moving forward very slowly. I have only worked here for two months. What I write now is also partially ugly code, but it goes completely against my nature and I feel uncomfortable with it. Coding something takes longer than estimated, my manager complains about why it takes so long, and I feel ashamed for needing so much time. Where I was used to just writing a query, I now build up whole try/catch methods. My manager knows my complaints, and the developers do too. There will be a meeting to lay out plans for 2013 on technology and the issues I and the company are facing. I am not looking for another job yet; it's close to where I live and the economy is fragile. Has anyone else had this kind of career, feeling like they are going backwards with technology? And how did you cope with it?

    Read the article

  • Good interview programming projects

    - by bigtang
    I'm looking for some small programming projects that I can give potential employees to gauge their programming abilities. These will be programmers straight out of college. I'm looking for projects that would take someone a couple of hours; they would email back their answers post-interview. One example would be: take this paragraph of text and return a list of alphabetized unique words; after each word, tell me how many times the word appeared and in which sentence(s) it appeared. Does anyone have any good suggestions?
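    For reference, a rough sketch of what an answer to that sample exercise might look like (one possible reading of the spec, not a definitive answer key):

      import re
      from collections import defaultdict

      def word_report(text):
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          counts, locations = defaultdict(int), defaultdict(set)
          for idx, sentence in enumerate(sentences, start=1):
              for word in re.findall(r"[a-z']+", sentence.lower()):
                  counts[word] += 1
                  locations[word].add(idx)
          # Alphabetized unique words, each with its count and the sentences it appears in.
          return [(word, counts[word], sorted(locations[word])) for word in sorted(counts)]

      for word, count, in_sentences in word_report("The cat sat. The dog sat too."):
          print(word, count, in_sentences)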

    Read the article

  • Which tags to use for good SEO on the page

    - by Aaditi Sharma
    I have an event page with the following items: the event name; venue name(s) (in some cases up to 5 or more venues); event info (genre(s), language, type(s)); the date(s) on which the event takes place; and the event description. Since the event name is unique and present in the title, I am assigning <H1> to it. However, there are multiple venue names, and the same venue may be repeated across the page, along with the dates. (Each) event info item is used a single time on the page. The dates are marked up in a styled manner using multiple spans; however, I am going to use a title on them. The event description is in a <p> tag. So my question is: which heading tags should I use for a good semantic description and SEO? Also, for the title on the dates, which format should I keep the date in - (dd/mm/yyyy)?

    Read the article

  • How do I know that I'm good at JavaScript

    - by lKashef
    I'm an ASP.NET developer, and I can't get a job because of my JavaScript skills. I've started reading about JavaScript in articles and tutorials, but I still haven't picked a book to read. Here is what I'm trying to understand: if, for example, you want to build your ASP.NET skills, you first learn the basics from a book, a course, etc.; then, to increase my knowledge and experience, I would build a website around any given idea, start running into problems, and learn as I go. But what can I do with JavaScript? How am I supposed to know how good I am at it? First things first: I'm sorry guys, I've been having some trouble commenting or upvoting on the website, but it's finally sorted out, so thanks everybody for your help =)

    Read the article

  • Any good site that teaches C++?

    - by Shinmaru
    I am searching for any good site that teaches C++, one that can explain most or all things about it (in general) and has a decent, active community. About me: I am new to programming (I know nothing of it, so please bear with me). I have only learned, in a very basic way, the Lua scripting language, so yes, I am your complete newbie. I got interested in programming through scripting in Lua, so you could say it was my small stepping stone. One would normally take a course in college, but not everyone is well funded for that, and I can't buy books for the same reason (yes, I'm somewhat poor; I only have money for essentials and bills). I'm not trying to get sympathy, just stating my situation. And please, something along the lines of "Programming for Dummies" (I'm not the brightest crayon in the box). I know this will not be easy, but your help will be most appreciated. I would learn for either Windows and/or Linux/Unix; I use both. Sites I know: Cplusplus.org (quite inactive, and the tutorials are a little aimed at the programming-savvy).

    Read the article

  • Top 5 Places to Get Good Quality Links & Boost Your Search Rank

    If you're looking to get a higher ranking for your website, the bottom line is that you need to get good quality links. Gone are the days when you could just rely on keywords on your site to get you to the top... or even getting 1,000's of un-targeted links to blast your way to the #1 spot on Google. Now, it's all about getting high quality links that will make Google think your site is "worthy" enough to put at the top of the results... and here's where to get those links.

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H board, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. On both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don't use RAID, and there is no chance of me using it in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions: What is the actual purpose of the IRST (pre-OS install) driver that isn't served by the post-OS driver that I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver, right? In the pre-OS procedure (loading the drivers at OS-installation time), after successfully completing the OS installation, do I need that post-OS driver? After installing the post-OS one I got a quick-launch icon that runs the IRST configuration application - where do I get that after installing the pre-OS driver? As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level, or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for that, is there any chance of any "overwriting" or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two reviews: http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html and http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html. The two cons against the Drobo for me are loudness and price. What disadvantages does the unRAID setup have compared to the Drobo FS? Does it also have the same ease of use, like swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk-failure protection, dynamic space for "partitions", better/worse effective capacity, etc.? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails - let's say the CPU overheats? Since I have a complete PC that is going to be replaced, I only have to pay for the software to use unRAID. I am going to use my NAS for: a music library (how well does it integrate with iTunes?), a picture library, a movie library, and development (I need to be able to use Time Machine). I am going to use this NAS with a MacBook Pro. My current disks: 2x 500 GB, 1x 1.5 TB, 1x 2 TB. On a Drobo FS I would have 2.26 TB of space. What would it be on unRAID? Is FreeNAS also an alternative?
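    As a rough back-of-envelope only (assuming single-drive redundancy on both systems, where usable space is approximately the sum of all drives minus the largest drive; these are assumptions for illustration, not vendor-verified figures):

      # Rough, assumption-laden estimate: with single-drive redundancy (Drobo's default
      # protection, or unRAID's single parity drive, which must be the largest disk),
      # usable capacity is roughly the sum of all drives minus the largest one.
      drives_tb = [0.5, 0.5, 1.5, 2.0]
      usable_tb = sum(drives_tb) - max(drives_tb)   # ~2.5 TB (decimal)
      usable_tib = usable_tb * 1e12 / 2**40         # ~2.27 TiB, close to the quoted 2.26
      print(round(usable_tb, 2), round(usable_tib, 2))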

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16 GB of RAM. I use it primarily for compiling and video encoding (and occasional web/db work). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached in tmpfs but writes safely go through to the SSD. I then want files (or blocks) that haven't been read lately on the SSD to be written back to an HDD using a compressed FS or block layer. So basically, reads: check tmpfs, then check the SSD, then check the HDD. And writes: straight to the SSD (for safety), then tmpfs (for speed). And periodically, or when space gets low: move the least frequently accessed files down one layer. I've seen a few projects of interest: CacheFS, cachefsd, and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption), and cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than an FS. I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read that tmpfs can use virtual memory; in that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem - I can load GRUB, the kernel, and the initrd from elsewhere if needed. So that's the background. The question has several components, I guess: the recommended FS and/or block layer for the SSD and the compressed HDD; recommended mkfs parameters (block size, options, etc.); the recommended cache/mount technology to bind the layers transparently; the required mount parameters; and the required kernel options/patches, etc.
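    Purely to illustrate the read/write ordering described above (not an actual filesystem, block layer, or any of the projects named; the mount paths are hypothetical), a toy sketch of the lookup logic looks like this:

      import os

      TMPFS, SSD, HDD = "/mnt/cache-tmpfs", "/mnt/cache-ssd", "/mnt/store-hdd"

      def read(relpath):
          # Check the fastest layer first, falling through to slower ones.
          for layer in (TMPFS, SSD, HDD):
              path = os.path.join(layer, relpath)
              if os.path.exists(path):
                  with open(path, "rb") as f:
                      return f.read()
          raise FileNotFoundError(relpath)

      def write(relpath, data):
          # SSD first for safety, then tmpfs for speed; demotion to HDD happens later.
          for layer in (SSD, TMPFS):
              path = os.path.join(layer, relpath)
              os.makedirs(os.path.dirname(path), exist_ok=True)
              with open(path, "wb") as f:
                  f.write(data)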

    Read the article

  • Technology behind twilio

    - by John Stewart
    I wanted to ask about the technology behind Twilio. I have been playing around with the service for a few days now and it is simply mind-blowing. While I don't have a direct need for it right now, I am curious about the back-end of the technology. So, can anyone shed some light on how Twilio does its magic?

    Read the article

  • Are you a good or bad programmer?

    - by Eli
    Hi All, I see a lot of questions on SO that are asked about 'good' programmers vs 'bad' programmers. For example, what is a good/bad programmer, how to tell a good/bad programmer, what to do about a bad programmer on a team, how to hire a good programmer. I know it's pretty easy to apply the words to other people, but I find myself wondering if anyone out there would actually define THEMSELVES in a Boolean fashion like this, rather than "good in some areas, weak in others..." I'm not asking as an either/or where you have to be one or the other, but as a 'both' - are you a good or bad programmer? If so (either one), why? Please note this isn't meant to be argumentative, or to define good/bad practices, etc. I just want to know how many people think they are good, bad, or neither out there.

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process - railway reservations, banks, customer support and so on - a line and a queue to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using mechanisms like locking and blocking, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data.
    - Concurrency is also affected when multiple processes attempt to change the same data simultaneously.

    There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Pessimistic concurrency:
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data-modification operations are in the system that any given read operation is likely to be affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other process can modify that data. Also acquires locks on data being modified, so no other process can access that data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic concurrency:
    - Assumes that there are sufficiently few conflicting data-modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction processing: a transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN, with the end marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties of a transaction.

    ACID properties: transaction processing must guarantee the consistency and recoverability of SQL Server databases, and ensure that all transactions are performed as a single unit of work regardless of hardware or system failure. A - Atomicity, C - Consistency, I - Isolation, D - Durability.
    - Atomicity: each transaction is treated as all or nothing - it either commits or aborts.
    - Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state - the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction dependencies: in addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).
    - Lost updates: occur when two processes read the same data, both manipulate the data changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation levels: SQL Server supports five isolation levels that control the behavior of read operations.

    Read Uncommitted
    - All of the behaviors above except lost updates are possible.
    - Implemented by allowing read operations to take no locks; because of this, a read won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely - this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Ensures that a read never reads data that another application hasn't committed.
    - If another transaction is updating data and holds exclusive locks on it, your transaction has to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but it prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. Data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read
    - A pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. It does, however, allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot
    - Snapshot isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable
    - The strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes. In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me - let us continue learning about SQL Server locking basics.
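    To make the isolation levels above concrete from a client application, here is a small, hedged Python sketch (pyodbc, the ODBC driver name, the server/database names, and the dbo.Orders table are all assumptions for illustration, not part of the original post):

      import pyodbc

      conn_str = (
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=localhost;DATABASE=SampleDB;Trusted_Connection=yes;"
      )
      conn = pyodbc.connect(conn_str, autocommit=False)
      cur = conn.cursor()

      # Pessimistic choice: REPEATABLE READ holds share locks until the transaction
      # ends, so rows read here cannot be changed by other sessions until we commit.
      cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
      cur.execute("SELECT COUNT(*) FROM dbo.Orders")
      print(cur.fetchone()[0])
      conn.commit()   # releases the locks held by this transaction
      conn.close()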
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article
