Search Results

Search found 18245 results on 730 pages for 'recursive query'.


  • CodePlex Daily Summary for Sunday, December 26, 2010

    CodePlex Daily Summary for Sunday, December 26, 2010

    Popular Releases

    - NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in the expense category report.
    - Temporary Data Storage Folder: TDS Folder source code: This is the latest version 0.2 Beta code. To obtain the password, request it at neutron.request@gmail.com.
    - NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo).
    - NewLife XCode: XCode v6.5.2010.1223: Chinese release notes (text not preserved in this extraction); covers the NewLife.Core, NewLife.Net, XControl, XTemplate, XAgent and NewLife.CommonEntity components, compatibility notes for XCode v3.5.2009.0714, and the XCoder / XCoder_Src template tools.
    - Windows Workflow Foundation on Codeplex: Microsoft.Activities.UnitTesting 1.71: New in this release: Episode as Tasks using the Task Parallel Library; CHM help file now included in the release; StyleCop compliance is resulting in massive refactoring but should not cause breaking changes. Breaking change: DefaultTimeout is now a TimeSpan; removed default parameters using int timeout = DefaultTimeout and added overloads instead.
    - MiniTwitter: 1.64: Japanese release notes (text not preserved); mentions a URL-handling fix since 1.63.
    - Ajax ASP.Net Forum: InSeCla Forum Software v0.1.9: Happy Christmas! Features added: features customizable per category level (customize at the ADMIN/Categories tab); allow anonymous threads and anonymous posts; virtual (friendly) URLs have finally been added, and you can have some forums (categories) using virtual URLs and others using normal URLs, which improves SEO indexing results; moderation instant on (delete thread, move thread), available to users who are members of moderators or administrators; InstantO...
    - VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48
    - Umbraco CMS: Umbraco 4.6 Beta (codename JUNO): Contains many new features focusing on an improved installation experience, a number of robust developer features, and more than 89 bug fixes since the 4.5.2 release: improved installer experience; updated starter kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; skinning engine and skin customization (see the Skinning Documentation Kit); default dashboards on install with a hide option; updated Login t...
    - SSH.NET Library: 2010.12.23: This release includes some bug fixes and a few new features. Fixes: allow retrieving big directory structures; the ssh-dss algorithm is fixed; populate sftp file attributes. New features: support for a passphrase when a private key is used; support for the diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms; allow providing multiple key files for authentication; support for the "keyboard-interactive" authentication method...
    - ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet? MvcSiteMapProvider is also listed in the NuGet feed. Like the project? Consider a donation via PayPal. Release notes: this will be the last release targeting ASP.NET MVC 2 and .NET 3.5; MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4. The Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan. ISiteMapNodeUrlResolver is now completely responsible for generating th...
    - SuperSocket, an extensible socket application framework: SuperSocket 1.3 beta 2: Compared with SuperSocket 1.3 beta 1: added support for .NET 3.5; replaced the Logging Application Block of EntLib with log4net; improved the logging code; fixed a bug in the QuickStart sample project; added IPv6 support.
    - TibiaPinger: TibiaPinger v1.0.
    - Media Companion: Media Companion 3.400: Extract the entire archive to a folder that has user access rights, e.g. desktop or documents. A manual is included to get you started.
    - Multicore Task Framework: MTF 1.0.1: Release 1.0.1 of Multicore Task Framework.
    - SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7: 1) added script save/load in the user query window; 2) fixed a problem with the connection dialog when choosing Windows auth but still asking for a user name; 3) auto-open the user table when double-clicking a table node; 4) improved alert messages, added a log-only method.
    - EnhSim: EnhSim 2.2.6 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84). To use the GUI you must have the .NET 4.0 Framework installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992). Fixing up some r...
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications, including Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New: improvements for confirm, popup and popup form; RenderView controller extension; the user experience for CRUD in the live demo has been substantially improved; added search. All the features are shown in the live demo.
    - GanttPlanner: GanttPlanner V1.0: Includes GanttPlanner.dll and also a demo application.
    - N2 CMS: 2.1 release candidate 3: Web Platform Installer support available. N2 is a lightweight CMS framework for ASP.NET that helps you build great web sites that anyone can update. Major changes: support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen); a bunch of bugs were fixed; file manager improvements (multiple file upload, resize images to fit); new image gallery; infinite scroll paging on news; content templates. First time with N2? Try the demo site. Download one of the templ...

    New Projects

    - A.I. Semantic Net Tests: A test using "semantic net" artificial intelligence developed in C++.
    - Awesome.ClientID: An HttpModule which serializes the ClientID of server controls and page/usercontrol properties to JSON, to make working with JavaScript easier without the need for outputting ClientIDs. A solution for clean ClientIDs for .NET 2.0/3.5.
    - BitTweet: A fast, simple and easy-to-use Twitter client with tweeting, reading and writing functions included. Written in Visual Basic .NET 2008. ~Tweet in a bit!
    - CarRental001: car
    - Crudo: CRUDO, the MCG (Model-Controller-Generator) CGF (Code Generation Framework). Project home page: http://adityayadav.com/CRUDO_The_MCG_Model_Controller_Generator_CGF_Code_Generation_Framework.aspx. Licenses: 1) GPL v2; 2) commercial (contact us for information).
    - Effeks: FX trading sample application using Prism.
    - Enhanced WebPart: A webpart with an enhanced properties editor. It lets you display properties of the most common types and provides an easy way to add custom controls for displaying your own types in the webpart properties editor. Fully supports localization.
    - Esteffin's graphic: Graphics course exercises, 2010.
    - farasun: Source code repository for farasun.wordpress.com.
    - GCTF: A .NET challenge held at FACISA.
    - Imgshow Plugin for Windows Live Writer: Using Imgshow Query you can publish multimedia content such as YouTube, Facebook Video, PDF and other Imgshow-supported services.
    - Instant Messenger Using WPF and WCF: An instant messenger built with WCF and WPF.
    - jMenu: A DotNetNuke module that covers all kinds of jQuery menu plugins.
    - Mcopy API: Copy all files and directories in the Windows shell. Supports long paths (fewer than 32000 characters) and network paths (e.g. \\server\share or \\127.0.0.1\share).
    - MOSSWebServices: Code for interacting with the 21 web services provided by MOSS, targeted at WSS developers. Currently contains code for the "UserGroups" web service; code for the other MOSS web services will be added soon.
    - Nest Hub - Twitter Client: A free Twitter client for PC, developed in VB.NET.
    - Neural Networks Library: Neural networks library by Sefnaj.
    - Parallelism: Unified parallel and concurrency framework. Project home page: http://adityayadav.com/Parallelism.aspx. Licenses: 1) GPL v2; 2) commercial (contact us for information).
    - PDF IRM Protector For SharePoint 2007 And 2010: Controls the conversion of PDF documents to their encrypted, rights-managed format and the decryption of PDF documents from their rights-managed format back to their original format. Targets both SharePoint 2007 and 2010.
    - PHP Form Validator - Isis: Project Isis simplifies form validation for PHP, offering a robust flag and function system that lets you encode forms with the correct validation data and be done.
    - Recycle Me: An open source project to help and educate people about reuse and recycling of daily-life items. This is a social project, so funds are needed for research and surveys; volunteers are welcome to contribute, and certificates for participation are available.
    - Sencha ExtJS Import Library for Script#: Script# import library for the Sencha Ext JS JavaScript framework.
    - Separating Vertices: Given a graph as a collection of edges and vertices, it identifies the separating vertices, biconnected components, and edges.
    - SIG - SISTEMA DE GESTÃO INTERNA (internal management system): A project built for MODEMTELECOM, a telecommunications consultancy.
    - Smart Card COM: Smart Card COM library, useful for accessing smart cards from a web page in Internet Explorer.
    - Social Hub: Social Hub.
    - Stock Research: Test project for stock research.
    - taokeba: Description in Chinese (text not preserved).
    - Temporary Data Storage Folder: Creates a folder that stores temporary data for temporary tasks, such as a file that is to be deleted after some time; those files are automatically deleted on exit. Learn more at www.neutronsoftwares.weebly.com.
    - TukUI Updater: An open source Windows updater tool for the World of Warcraft(R) addon TukUI, developed in .NET 3.5. The discussion thread is at the official TukUI forums: http://www.tukui.org/v2/forums/topic.php?id=4982
    - WebUtil: A project for using all features of the web.

    Read the article

  • DTracing a PHPUnit Test: Looking at Functional Programming

    - by cj
    Here's a quick example of using DTrace dynamic tracing to work out what a PHP code base does. I was reading the article Functional Programming in PHP by Patkos Csaba and wondering how efficient this style of programming is. I thought this would be a good time to fire up DTrace and see what is going on. Since DTrace is "always available", even on production machines (once PHP is compiled with --enable-dtrace), this was easy to do. I have Oracle Linux with the UEK3 kernel and PHP 5.5 with DTrace static probes enabled, as described in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. I installed the Functional Programming sample code and Sebastian Bergmann's PHPUnit. Although PHPUnit is included in the Functional Programming example, I found it easier to separately download and use its phar file:

        cd ~/Desktop
        wget -O master.zip https://github.com/tutsplus/functional-programming-in-php/archive/master.zip
        wget https://phar.phpunit.de/phpunit.phar
        unzip master.zip

    I created a DTrace D script functree.d:

        #pragma D option quiet

        self int indent;

        BEGIN
        {
            topfunc = $1;
        }

        php$target:::function-entry
        /copyinstr(arg0) == topfunc/
        {
            self->follow = 1;
        }

        php$target:::function-entry
        /self->follow/
        {
            self->indent += 2;
            printf("%*s %s%s%s\n", self->indent, "->",
                arg3 ? copyinstr(arg3) : "", arg4 ? copyinstr(arg4) : "", copyinstr(arg0));
        }

        php$target:::function-return
        /self->follow/
        {
            printf("%*s %s%s%s\n", self->indent, "<-",
                arg3 ? copyinstr(arg3) : "", arg4 ? copyinstr(arg4) : "", copyinstr(arg0));
            self->indent -= 2;
        }

        php$target:::function-return
        /copyinstr(arg0) == topfunc/
        {
            self->follow = 0;
        }

    This prints a PHP script function call tree starting from a given PHP function name. The name is passed as a parameter to DTrace and assigned to the variable topfunc when the D script starts. With this D script, choose a PHP function that isn't recursive, or modify the script to set self->follow = 0 only when all calls to that function have unwound. From looking at the sample FunSets.php code and its PHPUnit test driver FunSetsTest.php, I settled on one test function to trace:

        function testUnionContainsAllElements() { ... }

    I invoked DTrace to trace the function calls made by this test with:

        # dtrace -s ./functree.d -c 'php phpunit.phar \
          /home/cjones/Desktop/functional-programming-in-php-master/FunSets/Tests/FunSetsTest.php' \
          '"testUnionContainsAllElements"'

    The core of this command is a call to PHP to run PHPUnit on the FunSetsTest.php script. Outside that, DTrace is called and the PID of PHP is passed to the D script's $target variable so the probes fire just for this invocation of PHP. Note the quoting around the PHP function name passed to DTrace: the parameter must have double quotes included so DTrace knows it is a string. The output is:

        PHPUnit 3.7.28 by Sebastian Bergmann.
......-> FunSetsTest::testUnionContainsAllElements -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::union <- FunSets::union -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertFalse -> PHPUnit_Framework_Assert::isFalse -> {closure} -> main <- main <- {closure} <- PHPUnit_Framework_Assert::isFalse -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertFalse <- FunSetsTest::testUnionContainsAllElements ... Time: 1.85 seconds, Memory: 3.75Mb OK (9 tests, 23 assertions) The periods correspond to the successful tests before and after (and from) the test I was tracing. You can see the function entry ("->") and return ("<-") points. Cross checking with the testUnionContainsAllElements() source code confirms the two singletonSet() calls, one union() call, two assertTrue() calls and finally an assertFalse() call. These assertions have a contains() call as a parameter, so contains() is called before the PHPUnit assertion functions are run. You can see contains() being called recursively, and how the closures are invoked. If you want to focus on the application logic and suppress the PHPUnit function trace, you could turn off tracing when assertions are being checked by adding D clauses checking the entry and exit of assertFalse() and assertTrue(). 
    But if you want to see all of PHPUnit's code flow, you can modify the functree.d clauses that set and unset self->follow, and instead toggle the variable in the request-startup and request-shutdown probes:

        php$target:::request-startup  { self->follow = 1 }
        php$target:::request-shutdown { self->follow = 0 }

    Be prepared for a large amount of output!

    Read the article

  • June 22-24, 2010 in London City Level 400 SQL Server Performance Monitoring & Tuning Workshop

    - by sqlworkshops
    We are organizing the "3 Day Level 400 SQL Server Performance Monitoring & Tuning Workshop" for the first time in London City during June 22-24, 2010. The agenda is located @ www.sqlworkshops.com/workshops and you can register @ www.sqlworkshops.com/ruk. Charges: £1800 (5% discount for those who register before 21st May: £1710). In this 3-day Level 400 hands-on workshop, unlike short SQLBits sessions, we go deeper on the tuning topics. Not sure if this will be a good use of your time and money? Watch our webcasts @ www.sqlworkshops.com/webcasts. We are trying to balance these commercial offerings with our free community contributions; financially, these workshops are essential for us to stay in business!

    Feedback from the Finland workshop, posted by Jukka, Wärtsilä Oyj, on February 23, 2010 to the LinkedIn SQL Server User Group Finland (more feedback @ www.sqlworkshops.com/feedbacks): "Just want to start this thread and give some feedback on the workshop that I attended last week at Microsoft. Three days in a row, deep dive into query optimization and performance monitoring :-) I must say, the SQL guru Ramesh has all the tricks up his sleeves. The workshop was very helpful and, what's most important, no slide-show marathon: samples after samples explained very clearly, and with our own classroom SQL Servers we could try the same stuff while Ramesh typed his own samples. If the workshop is arranged again, I can most willingly recommend it to anyone who wants to know what's 'under the hood' of SQL Server 2008. Once again, thank you Microsoft and Ramesh for making this happen. May the force be with us all :-)"

    Hope to see you @ the workshop. Feel free to pass on this information to your SQL Server colleagues.
    -ramesh-
    www.sqlbits.com/speakers/r_meyyappan/default.aspx

    Read the article

  • multiple wildcard entries

    - by Murali
    My client has around 300,000 domains and they just have a wildcard record for all of them:

        *    A    12.12.12.12

    Now they want to create a subdomain that points to a different IP while still keeping the flexibility of the wildcard, something like:

        ww1.*    A    24.24.24.24
        *        A    12.12.12.12

    It looks like in BIND the lower "*" is a catch-all that takes over every query, and hence ww1 is not working. One of the solutions offered by the IT folks was to create 300K separate zones just for "ww1" and leave the "*" wildcard in place. Is there any other DNS software that can achieve this more easily? Any other ways to deal with it?

    Read the article

  • SQL SERVER – Table Variables and Transactions – SQL in Sixty Seconds #007 – Video

    - by pinaldave
    Today's SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Misconception and Resolution. Quite often I have seen people getting confused by certain behavior of T-SQL: they expect SQL Server to behave one way and it behaves differently. This kind of issue often creates confusion and frustration. Sometimes I have even seen them mistake the behavior for a bug and submit a bug report, when the reality is totally different. A similar concept is what we are going to see today. I have quite commonly seen developers assume that table variables will be rolled back when a transaction is rolled back. This sixty-second video shows that table variables are not rolled back when transactions are rolled back.

    More on Errors:
    - Difference Temp Table and Table Variable – Effect of Transaction
    - Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT
    - Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
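    For anyone who wants to verify the behavior described above, here is a minimal ADO.NET sketch (the connection string is a placeholder and the column names are made up for the example). It runs a single batch that inserts into both a table variable and a temp table inside a transaction, rolls back, and then counts the rows in each:

        using System;
        using System.Data.SqlClient;

        class TableVariableRollbackDemo
        {
            static void Main()
            {
                // Placeholder connection string; point it at any test database.
                const string connStr = "Server=.;Database=tempdb;Integrated Security=true";

                // One batch: the table variable keeps its row after ROLLBACK, the temp table does not.
                const string batch = @"
                    DECLARE @tv TABLE (id INT);
                    CREATE TABLE #tt (id INT);
                    BEGIN TRAN;
                    INSERT INTO @tv VALUES (1);
                    INSERT INTO #tt VALUES (1);
                    ROLLBACK TRAN;
                    SELECT (SELECT COUNT(*) FROM @tv) AS TableVariableRows,
                           (SELECT COUNT(*) FROM #tt) AS TempTableRows;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(batch, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        reader.Read();
                        // Expected: table variable rows = 1, temp table rows = 0
                        Console.WriteLine($"Table variable rows: {reader.GetInt32(0)}, temp table rows: {reader.GetInt32(1)}");
                    }
                }
            }
        }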

    Read the article

  • Is Internet routing (BGP) fully automated?

    - by Adal
    If all the routing tables on the Internet were erased simultaneously, would the routers be able to rediscover them automatically? I'm having an argument with a colleague who says that the RIPE routing tables are essential, but I remember reading that if the tables disappeared, the BGP protocol would allow routers to rediscover working routes between nodes by querying their neighbors, which in turn query their neighbors, until a working route is detected. That route would then be used to repopulate the routing tables. After a while, all the routes would be restored (not necessarily the optimal routes). Is that correct?

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over.

    The general data model: there are records with record IDs (RIDs), grouped together by group IDs (GIDs). The records have arbitrary data fields (maybe 5-15), a few of them mandatory and the rest optional, and thus sparse.

    The general use model: there are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, they need to retrieve all the records/RIDs with a certain GID. I don't need to search by the record field values; primarily I will query by the GID and maybe the RID.

    What database implementation should I use? I did some initial research comparing document-oriented and column-oriented databases, and it seems the document-oriented ones are a good fit, model-wise: I could store all the records together under the same document key using the GID. But I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, with a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID, with a column on each row for the GID value? What results in faster writes and reads under this model?
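    One way to picture the "key = GID, one entry per RID" option is this rough in-memory C# sketch (no particular database client; all type and method names are hypothetical). The GID plays the role of the partition/row key and each RID is one entry inside that partition, so a read fetches the whole group in a single lookup while writes stay cheap appends:

        using System.Collections.Generic;

        // Conceptual illustration only: partition key = GID, one entry per RID,
        // sparse fields stored as a small map per record.
        class GroupStore
        {
            private readonly Dictionary<string, SortedDictionary<string, Dictionary<string, string>>> groups
                = new Dictionary<string, SortedDictionary<string, Dictionary<string, string>>>();

            public void Write(string gid, string rid, Dictionary<string, string> fields)
            {
                if (!groups.TryGetValue(gid, out var records))
                    groups[gid] = records = new SortedDictionary<string, Dictionary<string, string>>();
                records[rid] = fields;   // append-style write, no prior read needed
            }

            // The read path described in the question: "give me every record for this GID".
            public IReadOnlyDictionary<string, Dictionary<string, string>> ReadGroup(string gid)
                => groups.TryGetValue(gid, out var records)
                    ? (IReadOnlyDictionary<string, Dictionary<string, string>>)records
                    : new Dictionary<string, Dictionary<string, string>>();
        }

    In Cassandra terms this corresponds to making the GID the partition key and the RID a clustering key, which keeps a whole group physically together; whether a single partition can hold hundreds of thousands of records comfortably is something to test with your own record sizes.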

    Read the article

  • Writing your own Makefile for OpenBSD ports

    - by The_dude_man
    I have an OpenBSD box running Python 2.6. I want to install py-setuptools, but that package is built against 2.5. I was curious what the Makefile for py-setuptools (http://www.openbsd.org/cgi-bin/cvsweb/ports/devel/py-setuptools/Makefile?rev=1.13;content-type=text%2Fx-cvsweb-markup) looks like, to see if it mentioned anything about Python 2.5 as a dependency, but I did not find anything version-dependent. I typed make install in /usr/ports/devel/py-setuptools on a whim, and it blew up because of the missing Python 2.5 dependency. That is expected. My question is: how do I modify the Makefile to build against Python 2.6? I came across port-modules(5) in the man pages (http://www.openbsd.org/cgi-bin/man.cgi?query=port-modules&sektion=5&arch=&apropos=0&manpath=OpenBSD+4.6#CORE+MODULES), but I am still clueless about how to specify which version to build against. Also, I don't see anything that actually installs the egg.

    Read the article

  • NAT for Sprint Nexus S "Portable Wi-Fi hotspot"

    - by Jon Rodriguez
    I am on a 2010 Macbook Air connected to the web over wifi tethering on my Sprint Nexus S. I want to be able to host a few files using MAMP, but it seems that Sprint is running a NAT. When I query checkip.dyndns.org right now, it returns 68.27.228.75. However, trying to navigate to that IP fails (even though I do have MAMP's Apache running on port 80, as verified via loopback). When I whois 68.27.228.75, it appears to be a Sprint address, with NetName "SPRINTPCS" and OrgName "Sprint Nextel Corporation". So, is there some way I can circumvent Sprint's NAT to allow people to connect to my server that is running on a Nexus S Portable Wi-Fi hotspot?

    Read the article

  • Writing a Book, and Moving my Blog

    - by Ben Nevarez
    I started blogging about SQL Server here at SQLblog back in July 2009, and it was a lot of fun; I enjoyed it a lot. Later, after a series of blog posts about the Query Optimizer, I was invited to write an entire book about that same topic. After a few months I realized it was going to be hard to continue both blogging and writing chapters for a book, on top of my regular day job, so I decided to stop blogging for a little while. Now that I have finished the last chapter of the book and am working on the final chapter reviews, I have decided to start blogging again. This time I am moving my blog to http://www.benjaminnevarez.com. As with my previous posts, I plan to write about my topics of interest, like the relational engine and basically anything related to SQL Server. Hopefully you will find my new blog interesting and useful. Finally, I would like to thank Adam for allowing me to blog here.

    Read the article

  • Developers are strange

    - by DavidWimbush
    Why do developers always use the GUI tools in SQL Server? I've always found this irritating and just vaguely assumed it's because they aren't familiar with SQL syntax. But when you think about it, it's a genuine puzzle. Developers type code all day, really heavy code too, like generics, lambda functions and extension methods. They (thankfully) scorn the Visual Studio feature where you drag a table onto the class and it pastes in lots of code to query the table into a DataSet or something. But when they want to add a column to a table, without fail they dive into the graphical table designer. And half the time the script it generates does horrible things like copy the table to a new one with the new column, delete the old table, and rename the new table. Which is fine if your users don't care about uptime. Is ALTER TABLE ... ADD <column definition> really that hard? I just don't get it.

    Read the article

  • logparser Message with error codes

    - by nsr81
    Hi All,
    Is there any way to get the complete error message using LogParser? When I run the following query:

        logparser -i:EVT -o:NAT "SELECT TimeGenerated,EventID,Message from System WHERE EventTypeName='Error event'"

    I get the following output:

        2009-09-02 19:35:44 7000 The USB Mass Storage Driver service failed to start due to the following error: %%1058

    The full "Message" in Event Viewer is:

        Description: The USB Mass Storage Driver service failed to start due to the following error: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.

    How can I obtain the complete message using LogParser?
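    If LogParser alone keeps returning the raw %% insertion placeholder, one workaround (outside LogParser itself) is to read the log through the .NET EventLog class, whose Message property returns the fully formatted description whenever the event source's message files can be loaded. A small C# sketch, assuming you only need the local System log:

        using System;
        using System.Diagnostics;

        class DumpSystemErrors
        {
            static void Main()
            {
                // Reads the local System log; run elevated if access is denied.
                using (var log = new EventLog("System"))
                {
                    foreach (EventLogEntry entry in log.Entries)
                    {
                        if (entry.EntryType != EventLogEntryType.Error)
                            continue;

                        // entry.Message is the formatted description (e.g. the full
                        // "The service cannot be started..." text) when the event
                        // source's message file is available on this machine.
                        Console.WriteLine("{0}\t{1}\t{2}",
                            entry.TimeGenerated, entry.InstanceId, entry.Message);
                    }
                }
            }
        }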

    Read the article

  • Revamped Google Webmaster Tools

    I was pleasantly surprised to realize today that Google's Webmaster Tools has had a minor overhaul and provides more details than before. Most obvious are the changes on the dashboard, where the Top Search Queries now show impressions and clickthroughs instead of the rankings shown before. Only the links for the search expressions are missing. It seems that the Top Search Queries were the focus of this update: the section is now spiced with detailed graphs about what happened during selectable periods on your site. Well, it seems the Webmaster Tools now mimic a stripped-down version of Google Analytics... I was very pleased by the details that are offered when you click on a single query term. It is really nice to see the search rankings and your responsible URLs at the same time; before, you had to put two browser instances side by side to achieve this kind of overview. Personally, I like the approach of visualizing statistics the way Google and other providers do. It gives you a quick and informative overview, and enables you to dig further into details about peaks and lows in your visits, page impressions or clickthroughs.

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario:

    Step 1: a user signs up and the user record is inserted into the database.
    Step 2: the user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, the user record has not yet been replicated to the slave, and an error is returned saying the user does not exist.

    This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
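    One common strategy is "read your own writes": route reads that immediately follow a signup to the master (or pin that user's session to the master for a short window) instead of the slave. A minimal sketch using MySQL Connector/NET; the connection strings, table and column names are placeholders:

        using System;
        using MySql.Data.MySqlClient;

        class SignupFlow
        {
            // Placeholder connection strings: one for the master, one for the read replica.
            const string MasterConn  = "Server=master-db;Database=app;Uid=app;Pwd=secret";
            const string ReplicaConn = "Server=replica-db;Database=app;Uid=app;Pwd=secret";

            public static void SignUp(string email, string passwordHash)
            {
                using (var conn = new MySqlConnection(MasterConn))
                {
                    conn.Open();
                    var cmd = new MySqlCommand(
                        "INSERT INTO users (email, password_hash) VALUES (@e, @p)", conn);
                    cmd.Parameters.AddWithValue("@e", email);
                    cmd.Parameters.AddWithValue("@p", passwordHash);
                    cmd.ExecuteNonQuery();
                }
            }

            // The login right after signup reads from the master, so it cannot race replication.
            // Steady-state logins (long after signup) can keep using the replica.
            public static bool UserExists(string email, bool justSignedUp)
            {
                var connStr = justSignedUp ? MasterConn : ReplicaConn;
                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();
                    var cmd = new MySqlCommand(
                        "SELECT COUNT(*) FROM users WHERE email = @e", conn);
                    cmd.Parameters.AddWithValue("@e", email);
                    return Convert.ToInt64(cmd.ExecuteScalar()) > 0;
                }
            }
        }

    Other approaches include semi-synchronous replication on the MySQL side, or checking replica lag before deciding which server to query; the application-level routing above is just the simplest to retrofit.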

    Read the article

  • SEO impact on subdomain for full name and obscure ccTLD

    - by Dan Christian
    There have been a few questions on subdomains and their impact on SEO, mostly in comparison to subfolders. The closest question I've found is this one, but it still doesn't completely answer my query. I'm setting up a blog for 'Sam Smith'. It's imperative that the SEO is based around his full name, as he is a prominent blogger and his name is his value. All TLD variations of 'samsmith' (samsmith.com, samsmith.cc, etc.) are taken. However, there has been the opportunity to register an obscure ccTLD for 'smith'. In regard to SEO value purely from the URL:

    1) Will there be any negative SEO implications for searches on 'Sam Smith' when setting up the subdomain as 'sam.smith.' compared to a more regular 'samsmith.' domain? Will a search engine recognise the subdomain as the full name, as opposed to just 'smith'?

    2) Are there any negative SEO implications of an obscure ccTLD? For instance, if Sam Smith were a prominent blogger in Canada with most of his audience based there, would there be any negative SEO if he had, for example, a .co ccTLD?

    Read the article

  • How to boot between OSes from inside each OS? in a Windows/Ubuntu dual boot system

    - by TheCompander
    My ideal scenario is that there is a script/command to boot into the alternate OS from the OS you are currently in; restarting without running the script/command would return you to the same OS. Currently I have GRUB set up to remember the last OS booted, using GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true, and I'd like to keep this option. I have read about the ability to manipulate GRUB from within Ubuntu to boot into Windows, shown in this link. Is there a way to similarly boot into Ubuntu from within Windows? I am primarily connecting to this device remotely, hence my query.

    Read the article

  • install yum on fedora core 6

    - by Thomas
    Hi, I have installed yum through rpm -ivh yum-3.0-6.noarch.rpm. The result came back as:

        [root@02e7709 ~]# rpm -ivh yum-3.0-6.noarch.rpm
        Preparing...                ########################################### [100%]
        package yum-3.0-6 is already installed

    I then used this command to query the RPM package:

        [root@02e7709 ~]# rpm -q yum-3.0-6.noarch.rpm
        package yum-3.0-6.noarch.rpm is not installed

    The two commands give different replies, and I don't think yum is actually installed. What is the problem here with "package yum-3.0-6.noarch.rpm is not installed"? When I run yum install subversion, this follows:

        [root@02e7709 ~]# yum install subversion
        Loading "installonlyn" plugin
        Setting up Install Process
        Setting up repositories
        core                100% |=========================|  1.1 kB    00:00
        rpmforge            100% |=========================|  1.1 kB    00:00
        Error: Cannot find a valid baseurl for repo: updates

    What does the error "Cannot find a valid baseurl for repo" mean?

    Read the article

  • Apache disable DNS lookups

    - by odeceixe
    I'm using Debian 4.3.2-1 and Apache 2 on my production server. Watching the logs, I noticed Apache is resolving clients' hostnames even with HostnameLookups Off in apache2.conf. I want to avoid these lookups, so I'm guessing Apache is making these DNS queries because I have mod_authz_host enabled. When I try to unlink this module, several modules complain because they use the Order directive. What is the clean way to go? Should I comment out all Order directives like:

        Order allow,deny
        Deny from all

    Is this the only way to stop Apache from making DNS requests? I would still like to deny access to .htaccess files and keep rules like that.

    Read the article

  • DNS Server Spoofed Request Amplification DDoS - Prevention

    - by Shackrock
    I've been conducting security scans, and a new finding popped up for me: DNS Server Spoofed Request Amplification DDoS. The report says:

        The remote DNS server answers to any request. It is possible to query the name servers (NS) of the root zone ('.') and get an answer which is bigger than the original request. By spoofing the source IP address, a remote attacker can leverage this 'amplification' to launch a denial of service attack against a third-party host using the remote DNS server.

        General solution: restrict access to your DNS server from the public network, or reconfigure it to reject such queries.

    I'm hosting my own DNS for my website, and I'm not sure what the solution is here. I'm really looking for some concrete, detailed steps to patch this, but haven't found any yet. Any ideas? CentOS 5 with WHM and cPanel. Also see: http://securitytnt.com/dns-amplification-attack/

    Read the article

  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Take Control of Workflow with Workflow Analyzer! Immediate analysis and output of your EBS Workflow environment.

    The EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes the configurations and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details, the script download, and a short overview video.

    Proactive benefits:
    - Immediate analysis and output of the Workflow environment
    - Identifies aged records
    - Identifies Workflow errors and volumes
    - Identifies looping Workflow items and stuck activities
    - Identifies Workflow system setup and configurations
    - Identifies and recommends Workflow best practices
    - Easy-to-add tool for regular Workflow maintenance
    - Execute the analysis anytime to compare trending against past outputs

    The Workflow Analyzer presents key details in an easy-to-review graphical manner. See the examples below:
    - Workflow Runtime Data Table Gauge: shows critical (red), bad (yellow) and good (green) depending on the number of workflow items (WF_ITEMS).
    - Workflow Error Notifications Pie Chart: shows the workflow error notification types.
    - Workflow Runtime Table Footprint Bar Chart: a pie chart shows the workflow error notification types and a bar chart shows the workflow runtime table footprint.

    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas for review. You can extend any query by reviewing the SQL script and then running it on your own, or making modifications for your own needs.

    Find more details in these notes:
    - Doc ID 1369938.1 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
    - Doc ID 1425053.1 How to run EBS Workflow Analyzer Tool as a Concurrent Request

    Or visit the My Oracle Support EBS - Core Workflow Community.

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide if this information should be located in the database or stay in the XML file. The file can range from 4 MB to 12 MB, and its depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the serialized file in a global variable to avoid re-serializing it each time the page is loaded. My reasons for thinking a database would be better are:

    - I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information.
    - I wouldn't have to load/serialize the XML and could just use my already-open database connections.
    - Searching for the data would be quicker(?), as I would just perform an SQL query rather than recursing the file.

    Has anyone got any ideas about which is better, and which option uses more resources on the server or would be quicker?

    EDIT: The file is read every time the web page is loaded (although only serialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
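    If the XML file stays, one middle ground is to keep it loaded once but also build a flat index keyed by the internal ID, so lookups become dictionary hits instead of recursive walks. A sketch with LINQ to XML; the "id" attribute name and the assumption that IDs are unique are guesses about the feed, not facts from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        static class FeedCache
        {
            private static readonly object Sync = new object();
            private static Dictionary<string, XElement> _byId;

            // Load and index the feed once; subsequent lookups are O(1) dictionary hits
            // instead of recursive searches through a 4-12 MB document.
            public static void Load(string path)
            {
                var doc = XDocument.Load(path);
                var index = doc.Descendants()
                               .Where(e => e.Attribute("id") != null)      // assumed "id" attribute
                               .ToDictionary(e => (string)e.Attribute("id"));
                lock (Sync) { _byId = index; }
            }

            public static XElement Find(string id)
            {
                lock (Sync)
                {
                    if (_byId == null) throw new InvalidOperationException("Call Load() first.");
                    return _byId.TryGetValue(id, out var element) ? element : null;
                }
            }
        }

    Note that ToDictionary throws if two elements share an ID; if duplicates are possible, group them instead, or that is another point in favour of moving the data into a database table with a proper key.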

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to nginx, and I have some confusion about its configuration. My web site contains folders in different locations:

        location / {
            root /Path1;
        }

        location ^~ /personal {
            alias /Path2;
        }

    When I request http://mysite/personal, I get the content of /Path2 instead of /Path1, as intended. Now I want to add a sub-directory under /personal with specific configuration, so I added:

        location /personal/download {
            autoindex on;
        }

    But I get a 404 error when requesting http://mysite/personal/download. According to the error log, the request is mapped to /Path1/personal/download, which is not correct. How can I configure nginx so that all requests to http://mysite/personal/* are served from the corresponding directory under /Path2?

    Read the article

  • how to pass traffic for port 80 not through openvpn?

    - by moti
    Is there a way to configure OpenVPN clients to route traffic for HTTP (port 80) and HTTPS (port 443) directly, i.e. not through the VPN, but through the clients' regular default gateway? All other traffic should go through the VPN. My client is running OpenVPN on Windows and my current configuration looks like this:

        client
        dev tun
        proto tcp
        remote my-server-2 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ../keys/ca.crt
        cert ../keys/client1.crt
        key ../keys/client1.key
        ns-cert-type server
        verb 3
        route-metric 1
        show-net-up
        dhcp-renew
        dhcp-release
        route-delay 0 120
        hand-window 180
        management localhost 13010
        management-hold
        management-query-passwords
        management-forget-disconnect
        management-signal
        auth-user-pass

    Read the article

  • Tip 13 : Kill a process using C#, from local to remote

    - by StanleyGu
    1. My first choice is always to try System.Diagnostics to kill a process.
    2. The first choice works very well for killing local processes. I thought it should work for killing a remote process too, because the Process class can be pointed at a remote machine by passing the machine name as a second argument (for example to Process.GetProcessesByName). So I pass the process name plus the remote machine name and call the Process.Kill() method.
    3. Unfortunately, it gives me the error message "Feature is not supported for remote machines." Apparently, you can query but not kill a remote process using the Process class in System.Diagnostics. The MSDN library documentation explicitly states this about the Process class: "Provides access to local and remote processes and enables you to start and stop local system processes."
    4. So I try my second choice: using System.Management to kill a process running on a remote machine. Make sure to add references to System.Management.dll and System.Management.Instrumentation.dll.
    5. The second choice works very well for killing a remote process. Just make sure the account running your program is configured with permission to kill a process on the remote machine.
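    For step 5, a sketch of the WMI route via System.Management: query Win32_Process on the remote machine and invoke its Terminate method. The machine name, credentials and process name below are placeholders, and the calling account needs the appropriate DCOM/WMI permissions on the target:

        using System;
        using System.Management;

        class RemoteKill
        {
            static void Main()
            {
                const string machine = "REMOTE-PC";          // placeholder machine name
                const string processName = "notepad.exe";    // placeholder process to kill

                var options = new ConnectionOptions
                {
                    Username = @"DOMAIN\admin",               // placeholder credentials; omit both
                    Password = "secret"                       // to use the current account
                };

                var scope = new ManagementScope($@"\\{machine}\root\cimv2", options);
                scope.Connect();

                var query = new ObjectQuery(
                    $"SELECT * FROM Win32_Process WHERE Name = '{processName}'");

                using (var searcher = new ManagementObjectSearcher(scope, query))
                    foreach (ManagementObject process in searcher.Get())
                    {
                        // Terminate returns 0 on success; other values are Win32_Process error codes.
                        var result = process.InvokeMethod("Terminate", null);
                        Console.WriteLine($"{process["Name"]} (PID {process["ProcessId"]}): return code {result}");
                    }
            }
        }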

    Read the article

  • debian modem problems !!!

    - by Raafat
    Hey there guys. I'm a new Debian user, and it looks like a very good choice for me: everything is stable, free and easy to use. The problem is, I'm using my modem to establish a dial-up connection to the internet (PPP) (a very old, stupid way I'm forced to use for now), and I'm using the KPPP application to do that, and nothing is working properly for me. It seems like it doesn't recognize my modem or something. I have already tried a few things, and now I know my modem is on /dev/tty0, so I made a link to it at /dev/modem and queried the modem using KPPP. It responded with something like:

        ATI :
        ATI0:
        ATI1:
        ...
        ...
        ATI7:

    with a text box to fill in next to each of these ATIs. And now, when I press connect in KPPP, it says "modem ready", and that's it. BTW, my modem is an MDC AC'97. Any suggestions please...

    Read the article
