Search Results

Search found 25408 results on 1017 pages for 'back'.

Page 120/1017 | < Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • Stop Yahoo from opening in a second tab when opening Chrome

    - by sam
    I recently downloaded Vuze (a torrent client), which, as well as installing itself, installs a whole bunch of rubbish toolbars (mostly by an outfit called 'Spigot') and tries to change your default search engine to Yahoo. In Safari and Firefox I've managed to change it back, but in Chrome, although I've deleted all the add-ons and restored my default search engine, whenever I open Chrome I get two tabs: 1) my default homepage and 2) a Yahoo search page. Any idea where I can disable this Yahoo search page? I've looked in the settings but couldn't find anywhere obvious (I've set my default search engine back to Google and deleted any add-ons), but this still doesn't do it. Any ideas? My last resort is a factory reset of Chrome, which I don't want to do as I have all my settings in there.

    Read the article

  • Restoring factory image of HP Laptop

    - by Ahmed
    I use an HP G62. I am unable to use my recovery disk to restore to factory settings. I had no problems earlier, until I created an additional partition, which I hear might have changed my hard disk to dynamic. How do I get back to 'normal'? I don't mind formatting the disk; I just want my factory OS back. I get an error message at about 69 percent telling me that the restoration process failed. I'm using the factory image disks I created with the HP Recovery Manager. I have tried formatting the hard drive clean and then restoring with the CD, but it didn't work. I was always able to restore before I created that partition.

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don't usually have this opportunity. The event is called Coders 4 Charities (C4C) and it's a grueling event of nearly 30 hours of coding over the weekend. When you finish you get to present to the charity and all of the other groups what you have completed. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website or intranet or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs: software developers volunteering their time to help charities better serve their community through the latest technology! The actual event was lined with multiple charities and about 50 developers, designers, business analysts, etc., each working with a different charity to come up with a solution that they could implement in less than 3 days. C4C provided a place and food for us so that we wouldn't have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don't think we made it down there to play, but the food and drinks were a huge help for us. The charity we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. Below is a screen shot with private data marked out. It was an awesome and humbling experience to be able to give back to a charity and I'm happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time to a charity?

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft's pricing for SQL Server in the cloud, SQL Azure, has been announced: $9.99 per month for 0–1GB, and $99.99 per month for up to 10GB. There's currently a 10GB maximum size cap for SQL Azure. For larger data storage needs, you'll need to break the databases into smaller sizes. Scaling SQL Azure Applications: If you think you're going to need 100GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 ≈ $99.99 anyway) and just make really sure none of the individual databases exceeds 10GB. Beep Beep, Back That Database Up: The bandwidth costs for SQL Azure are $.15 per GB of outbound bandwidth. Assuming that you don't compress the data before you pull it out of the cloud, that means daily backups of a 1GB database will add another $4.50 per month, and a 10GB database will add another $45/month. Daily backups will cost about half of what your monthly service charges cost. It's not completely clear from the press release, but if Microsoft follows Amazon's pricing model, bandwidth between the Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup. That would eliminate the data in/out costs, and minimize the Azure storage costs ($.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there's no native backup support in SQL Azure, and it's not clear whether Windows Azure will include tools like SQL Server Integration Services. More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/ Anish, S
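
    To make the backup arithmetic above concrete, here is a minimal sketch; the per-GB rate, database sizes, and 30-day month are assumptions taken from the figures quoted in the post, not from an official rate card.

        using System;

        class SqlAzureBackupCost
        {
            // Figures quoted in the post -- treat them as assumptions, not an official rate card.
            const decimal OutboundDollarsPerGb = 0.15m; // outbound bandwidth, $ per GB
            const int BackupsPerMonth = 30;             // one full, uncompressed backup per day

            static decimal MonthlyBackupBandwidthCost(decimal databaseSizeGb)
            {
                return databaseSizeGb * OutboundDollarsPerGb * BackupsPerMonth;
            }

            static void Main()
            {
                Console.WriteLine(MonthlyBackupBandwidthCost(1m));  // 4.50  -> roughly $4.50/month for a 1GB database
                Console.WriteLine(MonthlyBackupBandwidthCost(10m)); // 45.00 -> roughly $45/month for a 10GB database
            }
        }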

    Read the article

  • How do I restore my system from a "Backup and Restore Center" backup?

    - by Daniel R Hicks
    The Windows (Vista) documentation and the available online info are comprehensively vague. If I have a moderately brain-dead system and want to restore it, and I have a "Backup and Restore Center" backup whose "delta" is not quite a week old (but with a "full backup" behind it), what steps do I go through to recover my box back to that backup point? It's totally unclear whether simply doing "restore all" from the (advanced) "Center" is sufficient, or do I need to first take the box back to day zero with the system restore DVD, et al? (Just editing this to get my correct ID associated with it.)

    Read the article

  • SQLAuthority News – 5 days of SQL Server Reporting Service (SSRS) Summary

    - by Pinal Dave
    Earlier this week, I wrote a five-day series on SQL Server Reporting Services. The series is based on the book Beginning SSRS by Kathi Kellenberger. Supporting files are available as a free download from the www.Joes2Pros.com web site. I just completed reading the book – it is a fantastic book and I am loving every bit of it. I knew SSRS and I also knew how it works; however, what I did not know were the fine details of how to get the maximum out of SSRS. This book has personally enabled me with the knowledge that I was missing. Here is the question back to you – how many of you are working with SSRS and, when you have a question, are left with no help online? There are not enough blogs or books available on this subject. The way Kathi has written this book, it attempts to solve your day-to-day problems and makes you think about how you can take your daily problem to the next level. Here is the article series which I have written on this subject, available to read: SQL SERVER – What is SSRS and Why SSRS is asked for in many Job Openings? Determine if SSRS 2012 is Installed on your SQL Server Installing SQL Server Data Tools and SSRS Create a Very First Report with the Report Wizard How to Add an Identity Column to Table in SQL Server Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Service, SSRS

    Read the article

  • links for 2011-02-28

    - by Bob Rhubart
    Apache Tuscany : SCA Java 2.x Releases (tags: ping.fm) Richard Veryard on Architecture: Modernism and Enterprise Architecture "Underlying conventional enterprise architecture theory and practice are some implicit assumptions that could be loosely characterized as modernist. Several people are offering more or less radical departures from conventional enterprise architecture..." - Richard Veryard (tags: ping.fm entarch) Java / Oracle SOA blog: Building an asynchronous web service with OSB "A few weeks ago I made a blogpost over how you can build an asynchronous web service with JAX-WS. In this blogpost I will do the same in the Oracle Service Bus." - Oracle ACE Edwin Biemond (tags: oracle otn oracleace servicebus esb osb webservices soa) Enterprise Software Development with Java: GlassFish 3.1 arrived! Yes sir, we do cluster now! "GlassFish 3.1 is finally there. As promised by Oracle back in March last year! And it is an exciting release. It brings back all the clustering and high availability support we were missing since 2.x into the Java EE 6 world." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace glassfish)

    Read the article

  • Silverlight Cream for April 21, 2010 -- #843

    - by Dave Campbell
    In this Issue: Alan Beasley, Roboblob, SilverLaw, Mike Snow, and Chris Koenig. Shoutouts: Ozymandias has a discussion up: The Three Pillars of Xbox Live on Windows Phone John Papa announced that Silverlight 4 is now on WebPI: Get Silverlight 4 – Simplified! Dan Wahlin posted the code and material from DevConnections: Code from my DevConnections Talks and Workshop Tim Heuer has a good deal posted from GoDaddy: Get a Silverlight XAP signing certificate for cheap thanks to GoDaddy From SilverlightCream.com: ListBox Styling (Part2-ControlTemplate) in Expression Blend & Silverlight Alan Beasley is back with part 2 of his ListBox styling tutorial adventure in Expression Blend... this looks like some of the stuff I was getting close to in Win32 a bunch of years back... great stuff... thanks Alan! Unit Testing Modal Dialogs in MVVM and Silverlight 4 Roboblob responds to some feedback with an expansion on his previous post with the addition of some Unit Testing. ChildWindowResizeBehavior - Silverlight 4 Blend 4 RC design time support SilverLaw has a short post about a behavior he has available at the Expression Gallery that resizes a child window with the Mouse Wheel, and also has Design-time support in Blend. Tip of the Day #111 – How to Configure your Silverlight App to run in Elevated Trust Mode Mike Snow has his latest tip up, and this one is on both ends of the Elevated Trust Mode of OOB ... how to set it, and what your user experience is like. WP7 Part 2 – Working with Data Chris Koenig has part 2 of his WP7 exploration up ... he's tackling Nerd Dinner and pulling down OData. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Duplicate incoming TCP traffic on Debian Squeeze

    - by Erwan Queffélec
    I have to test a homebrew server that accepts a lot of incoming TCP traffic on a single port. The protocol is homebrew as well. For testing purposes, I'd like to send this traffic both to the production server (say, listening on port 12345) and to the test server (say, listening on port 23456). My client apps are "dumb": they never read data back, and the server never replies anyway; my server only accepts connections, does statistical computations, and stores/forwards/services both raw and computed data. Actually, the client apps and hardware are so simple there is no way I can tell the clients to send their stream to both servers... and using "fake" clients is not good enough. What could be the simplest solution? I can of course write an intermediary app that just copies the incoming data and sends it on to the testing server, pretending to be the client (a rough sketch of such a "tee" follows below). I have a single server running Squeeze and have total control over it. Thanks in advance for your replies.
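
    A minimal sketch of that intermediary "tee" idea, assuming the clients can be pointed at a new port (11111 here) while the production and test ports stay as described; since the protocol never sends replies, only the client-to-server direction is copied. All endpoints are illustrative assumptions.

        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class TcpTee
        {
            // Assumed endpoints -- substitute the real production and test servers.
            static readonly IPEndPoint Production = new IPEndPoint(IPAddress.Loopback, 12345);
            static readonly IPEndPoint Test       = new IPEndPoint(IPAddress.Loopback, 23456);

            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 11111); // clients connect here instead
                listener.Start();
                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    Task.Run(() => HandleClient(client)); // one copy loop per client connection
                }
            }

            static void HandleClient(TcpClient client)
            {
                using (client)
                using (var prod = new TcpClient())
                using (var test = new TcpClient())
                {
                    prod.Connect(Production);
                    test.Connect(Test);

                    var buffer = new byte[8192];
                    NetworkStream source = client.GetStream();
                    int read;
                    // Clients never read replies, so only the incoming direction is duplicated.
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        prod.GetStream().Write(buffer, 0, read);
                        try { test.GetStream().Write(buffer, 0, read); }
                        catch { /* a failing test server must not disturb the production copy */ }
                    }
                }
            }
        }

    (Capturing the traffic with tcpdump and replaying it against the test server later is another option, if live duplication turns out to be too intrusive.)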

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first. Right-Time Marketing: Real-time isn't just about executing faster; it extends to interactions with customers as well. As an industry, we've spent many years analyzing all the data that's been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn't take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn't translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua's roots have been in B2B, we've been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they've spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn't so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle's CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That's when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as "Jane with the snotty kid so make sure we check her out fast," but you suddenly became "time-starved female age 20-30 with kids." I'm not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today's sophisticated consumer demands scale, experience, and personal attention. To some extent we've delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn't love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don't be surprised if it's looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we'll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer's purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I'm calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that's the reason I'm calling. It puts all the relevant information on the customer service rep's screen as it connects the call. When I complain about the fee, the rep immediately sees I'm a great customer and I travel lots, so she suggests switching me to their traveler's card that doesn't have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let's combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that's the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he's heading to lunch. If he were near the mall location on a Saturday morning, that's a completely different context. But on his way to lunch, we'll let Thomas know that we've got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas' past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he's likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message; this time it's a personalized offer for 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn't buy anything with the shoes, we'll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren't pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it's the technology that allows this to happen in near real-time. Conclusion: As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • How to PERMANENTLY disable touchpad tap-to-click on Dell Inspiron/Windows 7

    - by Graham
    Hi all - my first time here, so I hope you can help. I've seen a lot of stuff on various forums (including here) about disabling the annoying "tap" function on a laptop touchpad. I learned the hard way not to de-install the driver (as the software suggests), since you then lose the Synaptics tab in the mouse control settings, and with it all means to modify the touchpad settings... incidentally, if this happens to you, reboot in safe mode and do a restore, and the Synaptics tab comes back. Not ideal, I know, but it works. Anyway, I have the most up-to-date drivers, and I can go to the Synaptics tab and disable the tap-to-click function no problem. However, next time the machine is booted, tap-to-click is back on. It can always be disabled again, but it's a pain having to reset it every time the machine is powered up. Is there a way to permanently disable it, once and for all? Thanks in advance, Graham

    Read the article

  • Dealing With Table Borders In OOXML

    - by Tim Murphy
    Note: Cross posted from Coding The Document. Formatting tables in a document programmatically can be a very complex task.  This is the major reason why we start our document generation projects with templates instead of building components in a document by hand. Borders are one aspect of a table that you may want to format.  Borders are used to make certain content in a table stand out.  If you need to conditionally set and remove borders there is something that you need to be aware of.  Even in OOXML you have the concepts of styles, inheriting styles and overriding styles. When Word defines a table it will reference a global style such as "TableGrid".  This style will include the borders for the table.  Specifically, the InsideHorizontalBorder and InsideVerticalBorder define the borders for the cells.  These can be overridden by the TableCellBorders collection of a particular cell.  Adding a double right border on a cell is as easy as the couple of lines of code below. wordprocessing.TableCellBorders borders = new wordprocessing.TableCellBorders(); borders.RightBorder = new RightBorder(){Val = BorderValues.Double, Color = "000000", ThemeColor = ThemeColorValues.Text1, Size = (UInt32Value)4U, Space = (UInt32Value)0U }; cell.TableCellProperties.Append(borders); If I want to revert back to the table's style for cell borders I simply need to remove all children from the TableCellBorders collection.  It is like removing a class identifier from a TD tag in HTML.  The style in the parent object takes back over. With the knowledge of how the borders work you can take the concept and apply it to other effects of styles. del.icio.us Tags: OOXML,Office Open XML,Microsoft Office 2007,Microsoft Word 2007,table,style,border
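
    A minimal sketch of that revert step, assuming the same OpenXML SDK types as the snippet above; the helper name is made up, and the cell is assumed to already carry a TableCellProperties element.

        using DocumentFormat.OpenXml.Wordprocessing;

        static class TableBorderHelper
        {
            // Drop the cell-level overrides so the table style's
            // InsideHorizontalBorder/InsideVerticalBorder apply again.
            public static void RevertCellBordersToTableStyle(TableCell cell)
            {
                if (cell.TableCellProperties == null) return;

                TableCellBorders borders =
                    cell.TableCellProperties.GetFirstChild<TableCellBorders>();
                if (borders != null)
                {
                    borders.RemoveAllChildren(); // like removing the class attribute from a TD tag
                }
            }
        }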

    Read the article

  • Fixing Chrome's AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser's back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded. I got hold of the page's data in a JavaScript ViewModel using a jQuery AJAX call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely. To make sure the browser didn't cache the AJAX call results I used jQuery's cache: false option, and ASP.NET MVC's OutputCache attribute for good measure. That all worked perfectly… except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it's a unique request – it made no difference. I made the AJAX call a POST – it made no difference. Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route: routes.MapRoute( name: "NonCachedAjaxActions", url: "AjaxCalls/{cacheDisablingToken}/{action}", defaults: new { controller = "AjaxCalls" }, constraints: new { cacheDisablingToken = "[0-9]+" }); …and this amendment to the ajax call: function loadPageData(url) { // Insert a timestamp segment before the URL's action segment: var indexOfFinalUrlSeparator = url.lastIndexOf("/"); var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) + "/" + new Date().getTime() + url.substring(indexOfFinalUrlSeparator); // Call the now-unique action URL: $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad }); } …did the trick.

    Read the article

  • AppUpdater and WAMP server

    - by Gerbrand
    I've got a WinForms application and I want to implement an auto-updater for it. I followed the instructions on the site and got everything set up right (the AppUpdater application). Now I tried a test and I'm getting the following error back from the AppUpdater: the remote server returned an error 405. I googled this and it is because my server isn't set up with the right access. I'm using a WAMP server, and in the Apache httpd.conf I added the following lines so my directory is allowed for access: <Directory "c:/wamp/www/updater/V11/"> Options Indexes FollowSymLinks AllowOverride all # onlineoffline tag - don't remove Order Deny,Allow Deny from all Allow from 127.0.0.1 </Directory> But I'm still getting the same error back. I can find information for the IIS configuration, but not for Apache. Edit: I'm still getting the error. I opened the Apache log file and I see the following - "PROPFIND /updater/V11/ HTTP/1.1" 405 238 - so the updater component is using HTTP DAV. Edit 2: it seems that nobody has had this kind of situation.
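
    A 405 on PROPFIND usually means Apache is rejecting the WebDAV verbs rather than refusing directory access. As a hedged sketch only (module paths and the lock-file location are assumptions for a default WAMP layout, and the post doesn't confirm exactly which DAV features AppUpdater needs), enabling mod_dav for that folder in httpd.conf would look roughly like this:

        # Load the WebDAV modules (often present but commented out)
        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_fs_module modules/mod_dav_fs.so

        # mod_dav_fs needs a writable lock database
        DavLockDB "c:/wamp/davlock/DavLock"

        <Directory "c:/wamp/www/updater/V11/">
            Dav On
            Options Indexes FollowSymLinks
            AllowOverride all
            Order Deny,Allow
            Deny from all
            Allow from 127.0.0.1
        </Directory>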

    Read the article

  • FileOpenPicker/FileSavePicker doesn't allow *.* wildcard file associations

    - by mbrit
    On Twitter, Matthias Jauernig commented that the FileOpenPicker and FileSavePicker don't allow *.* wildcard file associations. I was relaxed about this and wrote back that it was related to sandboxing, implying it was a "good thing"; however, as Matthias commented back, perhaps it's not. In Metro-style, the sandboxing works such that if something gives you a file (e.g. the picker, or a share operation), you can access it regardless of where it is on the system. If you find the file yourself, you have to declare the type. The reason why I think it's related to sandboxing is that if you work with files programmatically you have to be explicit about the file types. This is to stop malware that you think is only interested in - say - .PDF files from scanning and uploading any .EML files that it can find on the machine. It follows then that on the pickers that restriction would continue. It allows the retail store team to validate that an app is likely to behave itself. If it's an app that works with images, locking down the picker so that it can only access image file types makes sense. However, Matthias mentioned that he has an app that should allow files of any arbitrary type. That fits more into the "if the user selects it, it must be OK" camp than the "programmatic scanning" camp. So now I'm left wondering why the picker doesn't allow any type to be selected. I think then maybe the decision comes down to simplicity. A lot of the decisions in Metro-style design relate to ideas about "zero intimidation". Allowing the user to select any file is too much like Old Windows, and not enough like Reimagined Windows. What happens in Matthias's app if the user selects Explorer.exe as the file he or she wants to work with? I guess it's fine if you expect your user to know what they're doing (Old Windows), but not so fine if you're expecting a three-year-old to work with it (Reimagined Windows).
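
    For reference, the declared-type requirement discussed above looks roughly like this in a minimal WinRT C# sketch; the class name and the image-only extensions are assumptions for illustration.

        using System.Threading.Tasks;
        using Windows.Storage;
        using Windows.Storage.Pickers;

        static class PickerSample
        {
            public static async Task<StorageFile> PickImageAsync()
            {
                var picker = new FileOpenPicker();
                picker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;

                // The picker expects explicit file types to be declared up front.
                picker.FileTypeFilter.Add(".jpg");
                picker.FileTypeFilter.Add(".png");

                return await picker.PickSingleFileAsync(); // null if the user cancels
            }
        }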

    Read the article

  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a Job Search site named www.conservationjobboard.com and have a concern about how the domain is viewed by search engines. The issue is that when the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, the default.php page performed a redirect to the JobBoard.php file when www.conservationjobboard.com/ was requested. The main problem resulted because the redirect was a temporary redirect, causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as 2 separate pages. This has since been corrected to use the .htaccess file so that JobBoard.php is now the default file for the root directory, eliminating the need for the redirect. The problem is that search engines still show both URLs in search results (one including JobBoard.php and one that ends with /). Another potential problem is that some of my early backlinks are to conservationjobboard.com/JobBoard.php while the rest are to conservationjobboard.com. The 2 outstanding questions are as follows: 1. Is my domain still being penalized by search engines like Google for having duplicate homepage URLs? 2. Are all of the backlinks to my homepage being considered the same now, or is the total number of backlinks being split between the 2 different URLs? If you think there are still issues with how we have this set up, I was wondering if you could give me advice on what we should do differently. Thanks.

    Read the article

  • SQL SERVER – Select Columns from Stored Procedure Resultset

    - by Pinal Dave
    It is fun to go back to basics often. Here is one classic question: "How do I select columns from a stored procedure's result set?" Though stored procedures were introduced many years ago, the question about retrieving columns from a stored procedure is still very popular with beginners. Let us see the solution in quick steps. First we will create a sample stored procedure. CREATE PROCEDURE SampleSP AS SELECT 1 AS Col1, 2 AS Col2 UNION SELECT 11, 22 GO Now we will create a table where we will temporarily store the result set of the stored procedure. We will be using the INSERT INTO and EXEC commands to retrieve the values and insert them into the temporary table. CREATE TABLE #TempTable (Col1 INT, Col2 INT) GO INSERT INTO #TempTable EXEC SampleSP GO Next we will retrieve our data from the stored procedure. SELECT * FROM #TempTable GO Finally we will clean up all the objects which we have created. DROP TABLE #TempTable DROP PROCEDURE SampleSP GO Let me know if you want me to share more such back-to-basics tips. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL

    Read the article

  • Win 7 accessing large files uses 100% RAM

    - by user181276
    Running Win 7 64-bit SP1 with 8 GB RAM. I first noticed this problem when using the GUI to copy some large (5+ GB) files from one disk to another. What happens is the physical memory in use rises quite quickly to 100% and the system comes to a crawl. If I just start to access the file in a media player (it is a movie) the memory usage climbs up slowly but eventually reaches 100%. When copying the same files via XCOPY I do not have this problem. Using RAMMAP I see most of the memory usage is under "Mapped File" and is allocated under the "Active" column. If I select "Empty System Working Set" the RAM usage drops back down but then starts to climb back up. Any ideas on what I can check/test to eliminate this issue?

    Read the article

  • SQL Import to update existing records?

    - by Kenundrum
    I've got a table in a database that contains costs for items and that gets updated monthly. To update these costs, we have someone export the table, do some magic in Excel, and then import the table back into the database. We're running MSSQL 2005 and using the built-in SQL Server Management Studio. The problem is that when importing back into the table, we have to delete all the records before we import, or else we'll get errors. The ideal situation would be for the import to recognize the primary keys and then update the records instead of trying to create a second record with a duplicate key - halting the import. The best illustration of the behavior we're trying to get can be found at http://sqlmanager.net/en/products/mssql/dataimport/documentation/hs2180.html - the update-or-insert example. Is something like this possible with the built-in tools, or do we have to get third-party software to make it happen?
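
    One way to get that update-or-insert behaviour with the built-in tools on SQL Server 2005 is to let the Import wizard load a staging table and then reconcile it in T-SQL. A hedged sketch follows; the table and column names (ItemCosts, ItemCosts_Staging, ItemID, Cost) are made up for illustration.

        -- ItemCosts_Staging is filled by the Import/Export wizard and mirrors the target table.
        BEGIN TRANSACTION;

        -- 1) Update rows whose primary key already exists in the target.
        UPDATE t
        SET    t.Cost = s.Cost
        FROM   dbo.ItemCosts AS t
        INNER JOIN dbo.ItemCosts_Staging AS s
               ON s.ItemID = t.ItemID;

        -- 2) Insert rows that are genuinely new.
        INSERT INTO dbo.ItemCosts (ItemID, Cost)
        SELECT s.ItemID, s.Cost
        FROM   dbo.ItemCosts_Staging AS s
        WHERE  NOT EXISTS (SELECT 1 FROM dbo.ItemCosts AS t WHERE t.ItemID = s.ItemID);

        COMMIT TRANSACTION;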

    Read the article

  • Configuring a Genius GW-7200 Access Point

    - by alex
    I came across an access point we had a few years ago, and I'm now trying to get it set up to work on our network. Here are a couple of pictures: http://twitpic.com/194u06/full http://twitpic.com/194u0v/full I have plugged this into our network via the network port on the back. I set up a DHCP reservation based on the MAC address on the back of the access point; however, I cannot ping it or access the web interface. I've held down the reset button for 10 seconds to see if that would do anything. Google doesn't come up with anything on the matter :-(

    Read the article

  • Using ASP.NET C# and JavaScript

    - by ctck
    I'm looking for the most efficient / standardized way of passing data between client-side JavaScript code and the C# code-behind in an ASP.NET application. Currently I've been using the following methods to achieve this, but they all feel a bit like a fudge. The way I pass data from JavaScript to the C# code-behind is by setting hidden ASP.NET fields and triggering a postback: <asp:HiddenField ID="RandomList" runat="server" /> function SetDataField(data) { document.getElementById('<%=RandomList.ClientID%>').value = data; } Then in the C# code I collect the list: protected void GetData(object sender, EventArgs e) { var _list = RandomList.Value; } Going back the other way, I often use the ScriptManager to register a function and pass it data during Page_Load: ScriptManager.RegisterStartupScript(this, this.GetType(), "Set", "Test();", true); or I add attributes to controls before a postback or during the Initialization / PreRender stages: Btn.Attributes.Add("onclick", "DisplayMessage('Hello');"); These methods have served me well and do the job. However, they just don't feel complete. Is there a more standardized way of passing data between client-side markup / JavaScript and back-end code? I've seen some posts like this one: Injecting JavaScript : StackOverflow, which describes the HtmlElement class. Is this something I should look into? Thanks everyone for your time.
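
    One commonly used option that avoids the hidden-field round trip (a hedged sketch, not necessarily the definitive answer): expose a static page method and call it from script through the ScriptManager's PageMethods proxy. The method name and page class here are made up, and the page is assumed to contain <asp:ScriptManager runat="server" EnablePageMethods="true" />.

        // Code-behind: a static method the client-side proxy can call directly.
        using System.Web.Services;

        public partial class MyPage : System.Web.UI.Page
        {
            [WebMethod]
            public static string GetGreeting(string name)
            {
                return "Hello, " + name;   // any serializable data can be returned
            }
        }

        // Client side (JavaScript):
        //   PageMethods.GetGreeting("world", function (result) { alert(result); });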

    Read the article

  • SQL Azure Service Issues – 10.27.2012 (Restored Now)

    - by ToStringTheory
    Please note that if you have a Windows Azure website, or use SQL Azure, your site may be experiencing downtime currently. Notice: I just called in regarding one of my public-facing internet sites, because the site was failing to load anything but its error page, I couldn't connect to the database to inspect application error logs, and the Windows Azure Management portal won't load the SQL Azure extension. After speaking to the representative, he also mentioned that they were having some problems updating the Service Dashboard which shows service up/down time, and for now they are posting messages at http://account.windowsazure.com. Please note that this issue may only be affecting certain regions. Last, I may have misheard the representative, but he said that the outage was being categorized as a level 8, and if I heard correctly, I think he said that level 8 was the worst level. I can't say for sure on this though, because the phone connection to their support number was bad – large amounts of white noise. Good Luck! Update: It appears that this outage may also be affecting the following services: SQL Database, Service Bus, Datamarket, Windows Azure Marketplace, Shared Caching, Access Control 2.0, and SQL Reporting. The note on the account page mentions the South Central US region; however, I believe the representative I spoke to also mentioned North Central. As I said before though, the connection was bad. Update 2: My site regained connectivity about an hour ago, and it appears that the service dashboard is back in operation with correct status and history. It does appear that I misheard on the phone regarding multiple regions, so chances are this only affected a percentage of the platform. All in all, if this WAS their worst level of problem, they really got it fixed and back up pretty fast. I understand that it is inherent for a complex system such as Azure to have ups and downs, but at the end of the day, I am still happy to support Azure to its fullest!

    Read the article

  • Preinstalled Windows 8 and Linux UEFI dual boot on a laptop

    - by itchy355
    I am trying to set up Windows 8 and Arch Linux on a new Sony Vaio E14 with preinstalled Windows 8. So far: installed W8 to my new SSD (switched for the original HDD) using the Recovery Media; shrunk the W8 partition, deleted the recovery partition, disabled swap; confirmed W8 booting just fine. On to Arch: disabled Secure Boot in the BIOS; confirmed W8 still booting just fine; booted Arch off the CD and installed everything to the 4th and 5th partitions; set up rEFInd for EFISTUB kernel booting. After that it got worse. I was unable to boot anything other than Windows 8 (although I was glad that it at least kept working just fine). Tried: creating EFI\refind\ and putting the .efi there (as per the Arch manual); overwriting EFI\boot\bootx64.efi; overwriting EFI\Microsoft\Boot\bootmgr.efi; overwriting EFI\Microsoft\Boot\bootmgfw.efi --- YAY, rEFInd showed up! So far, so good. I've kept the whole W8 Boot\ directory in EFI\windows8 and set up a boot menuentry for it, and it booted just fine. But, upon restart, everything was wrong -- 'Operating system not found' instead of any bootloader (rEFInd or W8). Booted back into Arch using the live CD to find out that the EFI partition had an erroneous FAT table. fsck.vfat fixed it, and I found that EFI\Microsoft\Boot was back to its original state (all rEFInd files deleted and replaced with W8 bootloaders). I've overwritten them again and got back to rEFInd showing up correctly and Arch being perfectly bootable. After that I've tried only renaming EFI\Microsoft\Boot\bootmgfw.efi to bootmgfw.001.efi (then copying rEFInd's .efi to bootmgfw.efi and keeping EVERY OTHER file as it was), but with exactly the same result. Tried marking the GPT EFI partition as read-only, same result. Now I'm kinda out of luck. Arch boots fine, and so does W8, but it destroys the EFI partition in the process. Thanks for any ideas; Googling brought me this far and I can't find any better. PS -- Windows 8 MAYBE destroys the partition upon shutdown -- when I order a shutdown in W8, it takes unusually long (about half a minute instead of ~5 seconds). So in theory I could solve this by hard-resetting the laptop instead of doing a normal shutdown, but that's just not nice.

    Read the article

  • Formatted Mac external hard drive loaded on PC

    - by kjokay
    I have an issue where it appears that an external hard drive which had been formatted on a Mac was loaded as a drive in Windows. Windows is obviously unable to read the data, and now the drive won't mount on the Mac. It appears that Windows overwrote something concerning the drive's information about what filesystems and types it has on it. Mac Disk Utility is unable to repair the drive, and the partition is showing up in the utility as FAT32. Using an AppleXsoft utility, I am able to verify the data is still on this drive, but I'd rather not spend $100 to save these files (it's not my hard drive anyway). Is there a way I can use some UNIX commands to find out the partition information on the drive, back up the raw data on it, and then restore the data onto the drive after re-formatting it?

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as: SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs – or, more accurately, in order to choose when to have the NULLs – you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and NULLs in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set reveals some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips which gives us the database and table names; adding the index name by joining to sys.indexes gives us SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id] These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address') , 1, NULL, NULL) AS ddips Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N'AdventureWorks_2008'); SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address'); IF @db_id IS NULL BEGIN PRINT N'Invalid database'; END ELSE IF @object_id IS NULL BEGIN PRINT N'Invalid object'; END ELSE BEGIN SELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL, 'LIMITED'); END; GO In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area) it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database; let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are: avg_fragmentation_in_percent – the amount that the index is logically fragmented (it will show NULL when the dmv is queried in SAMPLED mode); fragment_count – the number of pieces that the index is broken into (NULL in SAMPLED mode); avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit (NULL in SAMPLED mode); and page_count – the total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index.
These are purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index so that they are instead always added to the end. * – it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, he will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
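
    As a rough illustration of the kind of 'intelligent' defragmentation decision described above – a hedged sketch only, with thresholds based on the commonly cited Books Online guidance and meant to be tuned rather than treated as rules:

        -- Suggest a maintenance action per index based on its logical fragmentation.
        SELECT  OBJECT_NAME(ips.[object_id])           AS TableName,
                i.[name]                               AS IndexName,
                ips.avg_fragmentation_in_percent,
                ips.page_count,
                CASE
                    WHEN ips.page_count < 1000                  THEN 'LEAVE ALONE'  -- too small to be worth the effort
                    WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'ALTER INDEX ... REBUILD'
                    WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'ALTER INDEX ... REORGANIZE'
                    ELSE 'LEAVE ALONE'
                END                                    AS SuggestedAction
        FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        INNER JOIN sys.indexes AS i
                ON i.[object_id] = ips.[object_id]
               AND i.index_id    = ips.index_id
        WHERE   ips.index_id > 0;   -- ignore heaps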

    Read the article
