Search Results

Search found 23567 results on 943 pages for 'max link'.

Page 122/943

  • networking with ssh thru wireless

    - by nkvnkv
    I am using Ubuntu 12.04 64-bit on my desktop and my laptop, and have installed the OpenSSH client and server on both of them. My desktop is connected to an ADSL2+ Router TD-8840 with a wired connection and has the 192.168.1.1 IP address. My laptop is connected to a 150Mbps Wireless N Router TL-WR741ND over wireless and has the 192.168.0.1 IP address. The ADSL2+ Router TD-8840 and the 150Mbps Wireless N Router TL-WR741ND are connected with a wired cable, using the blue WAN port on the TL-WR741ND.

    ifconfig from desktop:

        desktop:~$ ifconfig
        eth0 Link encap:Ethernet HWaddr 00:1d:92:37:1f:3d inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21d:92ff:fe37:1f3d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:57508 errors:0 dropped:0 overruns:0 frame:0 TX packets:44508 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:51547633 (51.5 MB) TX bytes:6371374 (6.3 MB) Interrupt:43 Base address:0x6000
        eth1 Link encap:Ethernet HWaddr 00:23:cd:b1:ff:e4 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:22 Base address:0x8400
        lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:109 errors:0 dropped:0 overruns:0 frame:0 TX packets:109 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:12044 (12.0 KB) TX bytes:12044 (12.0 KB)

    ifconfig from laptop:

        laptop:~$ ifconfig
        eth0 Link encap:Ethernet HWaddr 00:a0:d1:65:2a:42 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
        lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:334 errors:0 dropped:0 overruns:0 frame:0 TX packets:334 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:31244 (31.2 KB) TX bytes:31244 (31.2 KB)
        wlan0 Link encap:Ethernet HWaddr 00:19:d2:1b:19:81 inet addr:192.168.0.101 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::219:d2ff:fe1b:1981/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1590 errors:0 dropped:0 overruns:0 frame:0 TX packets:1276 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:874763 (874.7 KB) TX bytes:315401 (315.4 KB)

    I can connect to the desktop from the laptop via ssh with no problem, and the internet connection on both the laptop and the desktop works fine. But when I want to connect to the laptop from the desktop via ssh, I type ssh [email protected] in a terminal and get: ssh: connect to host 192.168.0.101 port 22: Connection timed out. Can anyone point out what is wrong?
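    A quick way to narrow this down from the desktop is to probe port 22 on the laptop and compare it with a probe of the wireless router itself. The Python sketch below is a diagnostic only, not a fix; the addresses are taken from the question, and the reading that the TL-WR741ND sits in front of the laptop as a second NATing router is an assumption.

        import socket

        def probe(host, port=22, timeout=5):
            # Diagnostic only: report whether a TCP connection to host:port succeeds.
            try:
                socket.create_connection((host, port), timeout=timeout).close()
                return "open"
            except socket.timeout:
                return "timed out (traffic dropped or never routed back)"
            except socket.error as exc:
                return "refused/unreachable: %s" % exc

        # Addresses taken from the question; 192.168.0.1 is assumed to be the TL-WR741ND itself.
        for host in ("192.168.1.1", "192.168.0.1", "192.168.0.101"):
            print("%s -> %s" % (host, probe(host)))

    A "refused" or "open" from 192.168.0.1 combined with a timeout from 192.168.0.101 would suggest the desktop's packets reach the wireless router but nothing behind it, pointing at the TL-WR741ND firewalling/NATing its blue WAN port (or a missing return route) rather than at sshd on the laptop. In that case, forwarding port 22 on the TL-WR741ND or switching it to access-point mode on the same 192.168.1.x subnet are the usual fixes.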

    Read the article

  • Razor - Unexpected "foreach" keyword after "@" character

    - by Jaco Pretorius
    I have a partial view done in razor. When I run it I get the following error - it seems like Razor gets stuck into thinking I'm writing code everywhere.

        Unexpected "foreach" keyword after "@" character. Once inside code, you do not need to prefix constructs like "foreach" with "@"

    Here is my view:

        @model IEnumerable<SomeModel>
        <div>
            @using(Html.BeginForm("Update", "UserManagement", FormMethod.Post))
            {
                @Html.Hidden("UserId", ViewBag.UserId)
                @foreach(var link in Model)
                {
                    if(link.Linked)
                    {
                        <input type="checkbox" name="userLinks" value="@link.Id" checked="checked" />@link.Description<br />
                    }
                    else
                    {
                        <input type="checkbox" name="userLinks" value="@link.Id" />@link.Description<br />
                    }
                }
            }
        </div>

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among both beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about beginning SQL Server architecture. As stated earlier, this is a deep subject; in this first article of the series he covers basic terminology, and in future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

    In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are the Relational Engine, the Storage Engine and SQL OS. Now we will discuss and understand each one of them.

    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different tasks of the Relational Engine: Query Processing, Memory Management, Thread and Task Management, Buffer Management, Distributed Query Processing.

    2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN etc.). To understand more, let's focus on the following diagram. When we talk about any database in SQL Server, there are 2 types of files that are created at the disk level – the data file and the log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand the data file and log file in more detail:

    Data File: The data file stores data in the form of data pages (8 KB), and these data pages are logically organized into extents.

    Extents: Extents are logical units in the database; a combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages like index, system, object data etc. On the other hand, uniform extents are dedicated to only one type.

    Pages: Below are some of the page types that can be stored in SQL Server:
    Data Page: It holds the data entered by the user, but not data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml.
    Index: It stores the index entries.
    Text/Image: It stores LOB (Large Object) data like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
    GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): They are used for saving information related to the allocation of extents.
    PFS (Page Free Space): Information related to page allocation and unused space available on pages.
    IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index per allocation unit.
    BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
    DCM (Differential Change Map): Information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

    Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to roll back transactions if requested and to recover the database in case of failure. Write-ahead logging is used to create log entries, transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.

    3) SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management (which deals with the buffer pool and log buffer) and deadlock detection using the blocking and locking structures. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR).

    I guess this brief article gives you an idea about the various terminologies related to SQL Server architecture. In future articles we will explore them further.

    Guest Author: The author of this article, Anil Kumar Yadav, is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Taking advantage of Windows Azure CDN and Dynamic Pages in ASP.NET - Caching content from hosted services

    - by Shawn Cicoria
    With the updates to Windows Azure CDN announced this week [1] I wanted to help illustrate the capability with a working sample that will serve up dynamic content from an ASP.NET site hosted in a WebRole. First, to get a good overview of the capability you can read the Overview of the Windows Azure CDN [2] content on MSDN. When you setup the ability to cache content from a hosted service, the requirement is to provide a path to your role’s DNS endpoint that ends in the path “/cdn”.  Additionally, you then map CDN to that service. What WAZ CDN does, is allow you to then map that through the CDN to your host.  The CDN will then make a request to your host on your client’s behalf. The requirement is still that your client, and any Url’s that are to be serviced through the CDN and this capability have to use the CDN DNS name and not your host – no different than what CDN does for Blog storage. The following 2 URL’s are samples of how the client needs to issue the requests. Windows Azure hosted service URL: http: //myHostedService.cloudapp.net/cdn/music.aspx   - for regular “dynamic” content Windows Azure CDN URL: http: //<identifier>.vo.msecnd.net/music.aspx   - for CDN “cachable” content. The first URL path’s the request direct to your host into the Azure datacenter.  The 2nd URL paths the request through the CDN infrastructure, where CDN will make the determination to request the content on behalf of the client to the Azure datacenter and your host on the /cdn path. The big advantage here is you can apply logic to your content creation.  What’s important is emitting the CDN friendly headers that allow CDN to request and re-request only when you designate based upon it’s rules of “staleness” as described in the overview page. With IIS7.5 there is an underlying issue when the Managed Module “OutputCache” is enabled that in order to emit a good header for your content, you’ll need to remove, and in my sample, helps provide CDN friendly headers.  You get IIS 7.5 when running under OS Family “2” in your service configuration. By default, and when the OutputCache managed module is loaded, if you use the HttpResponse.CachePolicy to set the Http Headers for “max-age” when the HttpCacheability is “Public”, you will NOT get the “max-age” emitted as part of the “Cache-control:” header.  Instead, the OutputCache module will remove “max-age” and just emit “public”.  It works ok when Cacheability is set to “private”. To work around the issue and ensure your code as follows emits the full max-age along with the public option, you need to remove as follows: <system.webServer>   <modules runAllManagedModulesForAllRequests="true">     <remove name="OutputCache"/>   </modules> </system.webServer>   Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetMaxAge(TimeSpan.FromMinutes(rv));   In the attached solution, the way I approached it was to have a VirtualApplication under the root site that has it’s own web.config  - this VirtualApplication is the /cdn of the site and when deployed to Azure as a Web Role will surface as a distinct IIS Application – along with a separate AppDomain. The CDN Sample is a simple Web Forms site that the /default landing page contains 3 IFrames to host: 1. Content direct from the host @   http://xxxx.cloudapp.net/cdn 2. Content via the CDN @ http://azxxx.vo.msecnd.net  3. Simple list of recent requests – showing where the request came from.   
When you run the sample the first time you hit the page, both the Host and the CDN will cause 2 initial requests to hit the host.  You won’t see the first requests in the list because of timing – but if you refresh, you’ll see that the list will show that you have 2 requests initially. 1. sourced direct from the Browser to the HOST 2. sourced via the CDN The picture above shows the call-outs of each of those requests – green rows showing requests coming direct to the HOST, yellow showing the CDN request.  The IP addresses of the green items are direct from the client, where the CDN is from the CDN data center. As you refresh the page (hit Ctrl+F5 to force a full refresh and avoid “304 – not changed”) you’ll see that the request to the HOST get’s processed direct; but the request to the CDN endpoint is serviced direct from the CDN and doesn’t incur any additional request back to the HOST. The following is the Headers from the CDN response (Status-Line) HTTP/1.1 200 OK Age 13 Cache-Control public, max-age=300 Connection keep-alive Content-Length 6212 Content-Type image/jpeg; charset=utf-8 Date Fri, 11 Mar 2011 20:47:14 GMT Expires Fri, 11 Mar 2011 20:52:01 GMT Last-Modified Fri, 11 Mar 2011 20:47:02 GMT Server Microsoft-IIS/7.5 X-AspNet-Version 4.0.30319 X-Powered-By ASP.NET   The following are the Headers from the HOST response (Status-Line) HTTP/1.1 200 OK Cache-Control public, max-age=300 Content-Length 6189 Content-Type image/jpeg; charset=utf-8 Date Fri, 11 Mar 2011 20:47:15 GMT Last-Modified Fri, 11 Mar 2011 20:47:02 GMT Server Microsoft-IIS/7.5 X-AspNet-Version 4.0.30319 X-Powered-By ASP.NET   You can see that with the CDN request, the countdown (age) starts for aging the content. The full sample is located here: CDNSampleSite.zip [1] http://blogs.msdn.com/b/windowsazure/archive/2011/03/09/now-available-updated-windows-azure-sdk-and-windows-azure-management-portal.aspx [2] http://msdn.microsoft.com/en-us/library/ff919703.aspx

    Read the article

  • ASP.NET XML parsing error only on FireFox..

    - by Broken Link
    I get this weird error only when I try to access the web page in Firefox; IE works just fine.

        XML Parsing Error: not well-formed
        Location: http://localhost/site/Home.aspx
        Line Number 1, Column 2: <%@ Page Language="C#" MasterPageFile="~/Site1.Master" AutoEventWireup="true" CodeBehind="Home.aspx.cs"

    Am I missing something?

    Read the article

  • How to optimize my PageRank calculation?

    - by asmaier
    In the book Programming Collective Intelligence I found the following function to compute the PageRank: def calculatepagerank(self,iterations=20): # clear out the current PageRank tables self.con.execute("drop table if exists pagerank") self.con.execute("create table pagerank(urlid primary key,score)") self.con.execute("create index prankidx on pagerank(urlid)") # initialize every url with a PageRank of 1.0 self.con.execute("insert into pagerank select rowid,1.0 from urllist") self.dbcommit() for i in range(iterations): print "Iteration %d" % i for (urlid,) in self.con.execute("select rowid from urllist"): pr=0.15 # Loop through all the pages that link to this one for (linker,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid): # Get the PageRank of the linker linkingpr=self.con.execute("select score from pagerank where urlid=%d" % linker).fetchone()[0] # Get the total number of links from the linker linkingcount=self.con.execute("select count(*) from link where fromid=%d" % linker).fetchone()[0] pr+=0.85*(linkingpr/linkingcount) self.con.execute("update pagerank set score=%f where urlid=%d" % (pr,urlid)) self.dbcommit() However, this function is very slow, because of all the SQL queries in every iteration >>> import cProfile >>> cProfile.run("crawler.calculatepagerank()") 2262510 function calls in 136.006 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 136.006 136.006 <string>:1(<module>) 1 20.826 20.826 136.006 136.006 searchengine.py:179(calculatepagerank) 21 0.000 0.000 0.528 0.025 searchengine.py:27(dbcommit) 21 0.528 0.025 0.528 0.025 {method 'commit' of 'sqlite3.Connecti 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler 1339864 112.602 0.000 112.602 0.000 {method 'execute' of 'sqlite3.Connec 922600 2.050 0.000 2.050 0.000 {method 'fetchone' of 'sqlite3.Cursor' 1 0.000 0.000 0.000 0.000 {range} So I optimized the function and came up with this: def calculatepagerank2(self,iterations=20): # clear out the current PageRank tables self.con.execute("drop table if exists pagerank") self.con.execute("create table pagerank(urlid primary key,score)") self.con.execute("create index prankidx on pagerank(urlid)") # initialize every url with a PageRank of 1.0 self.con.execute("insert into pagerank select rowid,1.0 from urllist") self.dbcommit() inlinks={} numoutlinks={} pagerank={} for (urlid,) in self.con.execute("select rowid from urllist"): inlinks[urlid]=[] numoutlinks[urlid]=0 # Initialize pagerank vector with 1.0 pagerank[urlid]=1.0 # Loop through all the pages that link to this one for (inlink,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid): inlinks[urlid].append(inlink) # get number of outgoing links from a page numoutlinks[urlid]=self.con.execute("select count(*) from link where fromid=%d" % urlid).fetchone()[0] for i in range(iterations): print "Iteration %d" % i for urlid in pagerank: pr=0.15 for link in inlinks[urlid]: linkpr=pagerank[link] linkcount=numoutlinks[link] pr+=0.85*(linkpr/linkcount) pagerank[urlid]=pr for urlid in pagerank: self.con.execute("update pagerank set score=%f where urlid=%d" % (pagerank[urlid],urlid)) self.dbcommit() This function is 20 times faster (but uses a lot more memory for all the temporary dictionaries) because it avoids the unnecessary SQL queries in every iteration: >>> cProfile.run("crawler.calculatepagerank2()") 64802 function calls in 6.950 CPU seconds Ordered by: standard name ncalls tottime percall 
cumtime percall filename:lineno(function) 1 0.004 0.004 6.950 6.950 <string>:1(<module>) 1 1.004 1.004 6.946 6.946 searchengine.py:207(calculatepagerank2 2 0.000 0.000 0.104 0.052 searchengine.py:27(dbcommit) 23065 0.012 0.000 0.012 0.000 {meth 'append' of 'list' objects} 2 0.104 0.052 0.104 0.052 {meth 'commit' of 'sqlite3.Connection 1 0.000 0.000 0.000 0.000 {meth 'disable' of '_lsprof.Profiler' 31298 5.809 0.000 5.809 0.000 {meth 'execute' of 'sqlite3.Connectio 10431 0.018 0.000 0.018 0.000 {method 'fetchone' of 'sqlite3.Cursor' 1 0.000 0.000 0.000 0.000 {range} But is it possible to further reduce the number of SQL queries to speed up the function even more?
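    One further step in the same direction is to pull the whole link graph out of SQLite once, run all the iterations purely in memory, and write the scores back with a single parameterised bulk update. The sketch below keeps the book's schema (urllist, link and pagerank tables) and the same damping behaviour as calculatepagerank2; the method name calculatepagerank3 and the single pass over the link table are illustrative additions, and it assumes the whole graph fits in memory.

        def calculatepagerank3(self, iterations=20):
            # Rebuild the pagerank table exactly as before
            self.con.execute("drop table if exists pagerank")
            self.con.execute("create table pagerank(urlid primary key,score)")
            self.con.execute("create index prankidx on pagerank(urlid)")
            self.con.execute("insert into pagerank select rowid,1.0 from urllist")
            self.dbcommit()

            # Two bulk queries instead of per-URL queries inside the loops
            urlids = [row[0] for row in self.con.execute("select rowid from urllist")]
            inlinks = dict((u, set()) for u in urlids)
            numoutlinks = dict((u, 0) for u in urlids)
            for fromid, toid in self.con.execute("select fromid, toid from link"):
                if toid in inlinks:
                    inlinks[toid].add(fromid)      # distinct linkers, like the original query
                if fromid in numoutlinks:
                    numoutlinks[fromid] += 1       # total outgoing links, like count(*)

            pagerank = dict((u, 1.0) for u in urlids)
            for i in range(iterations):
                for urlid in urlids:
                    pr = 0.15
                    for linker in inlinks[urlid]:
                        if numoutlinks[linker]:
                            pr += 0.85 * pagerank[linker] / numoutlinks[linker]
                    pagerank[urlid] = pr           # update in place, as calculatepagerank2 does

            # One parameterised bulk write instead of string-formatted statements
            self.con.executemany("update pagerank set score=? where urlid=?",
                                 [(score, urlid) for urlid, score in pagerank.items()])
            self.dbcommit()

    If the link table is too big for memory, a middle ground is to keep per-URL queries but run them with parameter placeholders inside a single transaction, which already removes most of the per-call overhead.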

    Read the article

  • How big can my SharePoint 2010 installation be?

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). 3 years ago, I had published “How big can my SharePoint 2007 installation be?” Well, SharePoint 2010 has significant under the covers improvements. So, how big can your SharePoint 2010 installation be? There are three kinds of limits you should know about Hard limits that cannot be exceeded by design. Configurable that are, well configurable – but the default values are set for a pretty good reason, so if you need to tweak, plan and understand before you tweak. Soft limits, you can exceed them, but it is not recommended that you do. Before you read any of the limits, read these two important disclaimers - 1. The limit depends on what you’re doing. So, don’t take the below as gospel, the reality depends on your situation. 2. There are many additional considerations in planning your SharePoint solution scalability and performance, besides just the below. So with those in mind, here goes.   Hard Limits - Zones per web app 5 RBS NAS performance Time to first byte of any response from NAS must be less than 20 milliseconds List row size 8000 bytes driven by how SP stores list items internally Max file size 2GB (default is 50MB, configurable). RBS does not increase this limit. Search metadata properties 10,000 per item crawled (pretty damn high, you’ll never need to worry about it). Max # of concurrent in-memory enterprise content types 5000 per web server, per tenant Max # of external system connections 500 per web server PerformancePoint services using Excel services as a datasource No single query can fetch more than 1 million excel cells Office Web Apps Renders One doc per second, per CPU core, per Application server, limited to a maximum of 8 cores.   Configurable Limits - Row Size Limit 6, configurable via SPWebApplication.MaxListItemRowStorage property List view lookup 8 join operations per query Max number of list items that a single operation can process at one time in normal hours 5000 Configurable via SPWebApplication.MaxItemsPerThrottledOperation   Also you get a warning at 3000, which is configurable via SPWebApplication.MaxItemsPerThrottledOperationWarningLevel   In addition, throttle overrides can be requested, throttle overrides can be disabled, and time windows can be set when throttle is disabled. Max number of list items for administrators that a single operation can process at one time in normal hours 20000 Configurable via SPWebApplication.MaxItemsPerThrottledOperationOverride Enumerating subsites 2000 Word and Powerpoint co-authoring simultaneous editors 10 (Hard limit is 99). # of webparts on a page 25 Search Crawl DBs per search service app 10 Items per crawl db 25 million Search Keywords 200 per site collection. There is a max limit of 5000, which can then be modified by editing the web.config/client.config. Concurrent # of workflows on a content db 15. Workflows running in the timer service are not counted in this limit. Further workflows are queued. Can be configured via the Set-SPFarmConfig powershell commandlet. Number of events picked by the workflow timer job and delivered to workflows 100. You can increase this limit by running additional instances of the workflow timer service. Visio services file size 50MB Visio web drawing recalculation timeout 120 seconds Configurable via – Powershell commandlet Set-SPVisioPerformance Visio services minimum and maximum cache age for data connected diagrams 0 to 24 hours. Default is 60 minutes. 
Configurable via – Powershell commandlet Set-SPVisioPerformance   Soft Limits - Content Databases 300 per web app Application Pools 10 per web server Managed Paths 20 per web app Content Database Size 200GB per Content DB Size of 1 site collection 100GB # of sites in a site collection 250,000 Documents in a library 30 Million, with nesting. Depends heavily on type and usage and size of documents. Items 30 million. Depends heavily on usage of items. SPGroups one SPUser can be in 5000 Users in a site collection 2 million, depends on UI, nesting, containers and underlying user store AD Principals in a SPGroup 5000 SPGroups in a site collection 10000 Search Service Instances 20 Indexed Items in Search 100 million Crawl Log entries 100 million Search Alerts 1 million per search application Search Crawled Properties 1/2 million URL removals in search 100 removals per operation User Profiles 2 million per service application Social Tags 500 million per social database Comment on the article ....

    Read the article

  • How does one paint the entire row's background in a QStyledItemDelegate ?

    - by Casey Link
    I have a QTableView which I am setting a custom QStyledItemDelegate on. In addition to the custom item painting, I want to style the row's background color for the selection/hovered states. The look I am going for is something like this KGet screenshot: Here is my code: void MyDelegate::paint( QPainter* painter, const QStyleOptionViewItem& opt, const QModelIndex& index ) const { QBrush backBrush; QColor foreColor; bool hover = false; if ( opt.state & QStyle::State_MouseOver ) { backBrush = opt.palette.color( QPalette::Highlight ).light( 115 ); foreColor = opt.palette.color( QPalette::HighlightedText ); hover = true; } QStyleOptionViewItemV4 option(opt); initStyleOption(&option, index); painter->save(); const QStyle *style = option.widget ? option.widget->style() : QApplication::style(); const QWidget* widget = option.widget; if( hover ) { option.backgroundBrush = backBrush; } painter->save(); style->drawPrimitive(QStyle::PE_PanelItemViewItem, &option, painter, widget); painter->restore(); switch( index.column() ) { case 0: // we want default behavior style->drawControl(QStyle::CE_ItemViewItem, &option, painter, widget); break; case 1: // some custom drawText break; case 2: // draw a QStyleOptionProgressBar break; } painter->restore(); } The result is that each individual cell receives the mousedover background only when the mouse is over it, and not the entire row. It is hard to describe so here is a screenshot: In that picture the mouse was over the left most cell, hence the highlighted background.. but I want the background to be drawn over the entire row. How can I achieve this? Edit: With some more thought I've realized that the QStyle::State_MouseOver state is only being passed for actual cell which the mouse is over, and when the paint method is called for the other cells in the row QStyle::State_MouseOver is not set. So the question becomes is there a QStyle::State_MouseOver_Row state (answer: no), so how do I go about achieving that?

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database The last place I like to visit is always a hospital. With the monsoon season starting, intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously I hate it). So when I visit my doctor, it is always interesting in the way he quizzes me. The routine question of – “How many days have you had this?”, “Is there any pattern?”, “Did you drench in rain?”, “Do you have any other symptom?” and so on. The idea here is that the doctor wants to find any anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out. SQL Server has its way to find if the server data / files are in consistent state using the DBCC commands. Back to SQL Server In real life, Database consistency check is one of the critical operations a DBA generally doesn’t give much priority. Many readers of my blogs have asked many times, how do we know if the database is consistent? How do I read output of DBCC CHECKDB and find if everything is right or not? My common answer to all of them is – look at the bottom of checkdb (or checktable) output and look for below line. CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’. Above is a “good sign” because we are seeing zero allocation and zero consistency error. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown as below: CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see non-zero error then most of the time (not always) we get repair options depending on the level of corruption. There is risk involved with above option (repair_allow_data_loss), that is – we would lose the data. Sometimes the option would be repair_rebuild which is little safer. Though these options are available, it is important to find the root cause to the problem. In standard report, there is a report which can show the history of checkdb executed for the selected database. Since this is a database level report, we need to right click on database, click Reports, click Standard Reports and then choose “Database Consistency History” report. The information in this report is picked from default trace. If default trace is disabled or there is no checkdb run or information is not there in default trace (because it’s rolled over), we would get report like below. As we can see report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled. To demonstrate, I have caused corruption in one of the database and did below steps. Run CheckDB so that errors are reported. Fix the corruption by losing the data using repair option Run CheckDB again to check if corruption is cleared. After that I have launched the report and below is what we would see. If you are lazy like me and don’t want to run the report manually for each database then below query would be handy to provide same report for all database. This query is runs behind the scenes by the report. All I have done is remove the filter for database name (at the last – highlighted). 
DECLARE @curr_tracefilename VARCHAR(500); DECLARE @base_tracefilename VARCHAR(500); DECLARE @indx INT; SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1; SET @curr_tracefilename = REVERSE(@curr_tracefilename); SELECT @indx  = PATINDEX('%\%', @curr_tracefilename) ; SET @curr_tracefilename = REVERSE(@curr_tracefilename); SET @base_tracefilename = LEFT( @curr_tracefilename,LEN(@curr_tracefilename) - @indx) + '\log.trc'; SELECT  SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),36, PATINDEX('%executed%',TEXTData)-36) AS command ,       LoginName ,       StartTime ,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%found%',TEXTData) +6,PATINDEX('%errors %',TEXTData)-PATINDEX('%found%',TEXTData)-6)) AS errors ,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%repaired%',TEXTData) +9,PATINDEX('%errors.%',TEXTData)-PATINDEX('%repaired%',TEXTData)-9)) repaired ,       SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%time:%',TEXTData)+6,PATINDEX('%hours%',TEXTData)-PATINDEX('%time:%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%hours%',TEXTData) +6,PATINDEX('%minutes%',TEXTData)-PATINDEX('%hours%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%minutes%',TEXTData) +8,PATINDEX('%seconds.%',TEXTData)-PATINDEX('%minutes%',TEXTData)-8) AS time FROM::fn_trace_gettable( @base_tracefilename, DEFAULT) WHERE EventClass = 22 AND SUBSTRING(TEXTData,36,12) = 'DBCC CHECKDB' -- AND DatabaseName = @DatabaseName; Don’t get worried about the logic above. All it is doing is reading the trace files, parsing below entry and getting out information for underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully now onwards you would run checkdb and understand the importance of it. As responsible DBAs I am sure you are already doing it, let me know how often do you actually run them on you production environment? Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • help with django and accented characters?

    - by Asinox
    Hi guys, I have a problem with accented characters: the Django admin saves my data without encoding it to something like "&aacute;". Example: for a word like "Canción", I would like it saved as Canci&oacute;n, not Canción. I'm using the Sociable app:

        {% load sociable_tags %}
        {% get_sociable Facebook TwitThis Google MySpace del.icio.us YahooBuzz Live as sociable_links with url=object.get_absolute_url title=object.titulo %}
        {% for link in sociable_links %}
        <a href="{{ link.link }}"><img alt="{{ link.site }}" title="{{ link.site }}" src="{{ link.image }}" /></a>
        {% endfor %}

    But I get an error if my object.titulo (the title of the article) has an accented word: Caught KeyError while rendering: u'\xfa'. Any ideas? I have DEFAULT_CHARSET = 'utf-8' in my settings, and my MySQL database uses utf8_general_ci. Thanks, and sorry for my English.
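    If the goal really is to store or emit HTML character references rather than raw UTF-8 characters, one way to do it in Python is to run the title through the xmlcharrefreplace error handler before it reaches the template tag. This is a sketch, not part of the Sociable API; the helper name and the idea of exposing it as a model property are assumptions, and note it produces numeric references (&#243;) rather than named entities (&oacute;).

        # -*- coding: utf-8 -*-

        def to_html_entities(text):
            # Replace non-ASCII characters with numeric HTML character references.
            if isinstance(text, bytes):            # tolerate already-encoded input
                text = text.decode("utf-8")
            return text.encode("ascii", "xmlcharrefreplace").decode("ascii")

        # Hypothetical model property the template could use instead of object.titulo:
        # class Articulo(models.Model):
        #     titulo = models.CharField(max_length=200)
        #
        #     @property
        #     def titulo_entities(self):
        #         return to_html_entities(self.titulo)

        print(to_html_entities(u"Canción"))        # -> Canci&#243;n

    Whether this also removes the KeyError depends on what the sociable tag does with the title internally; the encoding step only guarantees the stored/rendered value is pure ASCII.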

    Read the article

  • Create edges in Blender

    - by Mikey
    I've worked with 3DS Max at uni and am trying to learn Blender. My problem is that I know a lot of simple techniques from 3DS Max that I'm having trouble translating into Blender. So my question is: say I have a poly in the middle of a mesh and I want to split it in two, simply adding an edge between two edges. This would create two 5-gons, one on either side. It's a simple technique I use every now and then when I want to modify geometry; it's called "Edge Connect" in 3DS Max. In Blender the only edge-connect method I can find is to create edge loops, which is not helpful when aiming at low-poly iPhone games. Is there an equivalent in Blender?

    Read the article

  • Select comma separated result from via comma separated parameter

    - by Rodney Vinyard
    Select comma separated result from via comma separated parameter PROCEDURE [dbo].[GetCommaSepStringsByCommaSepNumericIds] (@CommaSepNumericIds varchar(max))   AS   BEGIN   /* exec GetCommaSepStringsByCommaSepNumericIds '1xx1, 1xx2, 1xx3' */ DECLARE @returnCommaSepIds varchar(max); with cte as ( select distinct Left(qc.myString, 1) + '-' + substring(qc.myString, 2, 9) + '-' + substring(qc.myString, 11, 7) as myString from q_CoaRequestCompound qc               JOIN               dbo.SplitStringToNumberTable(@CommaSepNumericIds) AS s               ON               qc.q_CoaRequestId = s.ID where SUBSTRING(upper(myString), 1, 1) in('L', '?') ) SELECT @returnCommaSepIds = COALESCE(@returnCommaSepIds + ''',''', '''') + CAST(myString AS varchar(2x)) FROM cte;   set @returnCommaSepIds = @returnCommaSepIds + '''' SELECT @returnCommaSepIds   End   FUNCTION [dbo].[SplitStringToNumberTable] (        @commaSeparatedList varchar(max) ) RETURNS @outTable table (        ID int ) AS BEGIN        DECLARE @parsedItem varchar(10), @Pos int          SET @commaSeparatedList = LTRIM(RTRIM(@commaSeparatedList))+ ','        SET @commaSeparatedList = REPLACE(@commaSeparatedList, ' ', '')        SET @Pos = CHARINDEX(',', @commaSeparatedList, 1)          IF REPLACE(@commaSeparatedList, ',', '') <> ''        BEGIN               WHILE @Pos > 0               BEGIN                      SET @parsedItem = LTRIM(RTRIM(LEFT(@commaSeparatedList, @Pos - 1)))                      IF @parsedItem <> ''                            BEGIN                                   INSERT INTO @outTable(ID)                                   VALUES (CAST(@parsedItem AS int)) --Use Appropriate conversion                            END                            SET @commaSeparatedList = RIGHT(@commaSeparatedList, LEN(@commaSeparatedList) - @Pos)                            SET @Pos = CHARINDEX(',', @commaSeparatedList, 1)               END        END           RETURN END

    Read the article

  • Example: Controlling randomizer using code contracts

    - by DigiMortal
    One cool addition to Visual Studio 2010 is support for code contracts. Code contracts make sure that all conditions under what method is supposed to run correctly are met. Those who are familiar with unit tests will find code contracts easy to use. In this posting I will show you simple example about static contract checking (example solution is included). To try out code contracts you need at least Visual Studio 2010 Standard Edition. Also you need code contracts package. You can download package from DevLabs Code Contracts page. NB! Speakers, you can use the example solution in your presentations as long as you mention me and this blog in your sessions. Solution has readme.txt file that gives you steps to go through when presenting solution in sessions. This blog posting is companion posting for Visual Studio solution referred below. As an example let’s look at the following class. public class Randomizer {     public static int GetRandomFromRange(int min, int max)     {         var rnd = new Random();         return rnd.Next(min, max);     }       public static int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires(min < max, "Min must be less than max");           var rnd = new Random();         return rnd.Next(min, max);     } } GetRandomFromRange() method returns results without any checking. GetRandomFromRangeContracted() uses one code contract that makes sure that minimum value is less than maximum value. Now let’s run the following code. class Program {     static void Main(string[] args)     {         var random1 = Randomizer.GetRandomFromRange(0, 9);         Console.WriteLine("Random 1: " + random1);           var random2 = Randomizer.GetRandomFromRange(1, 1);         Console.WriteLine("Random 2: " + random2);           var random3 = Randomizer.GetRandomFromRangeContracted(5, 5);         Console.WriteLine("Random 3: " + random3);           Console.WriteLine(" ");         Console.WriteLine("Press any key to exit ...");         Console.ReadKey();     } } As we have not turned on support for code contracts the code runs without any problems and we get no warnings by Visual Studio that something is wrong. Now let’s turn on static checking for code contracts. As you can see then code still compiles without any errors but Visual Studio warns you about possible problems with contracts. Click on image to see it at original size.  When we open Error list and run our application we get the following output to errors list. Note that these messages are not shown immediately. There is little delay between application starting and appearance of these messages. So wait couple of seconds before going out of your mind. Click on image to see it at original size.  If you look at these warnings you can see that warnings show you illegal calls and also contracts against what they are going. Third warning points to GetRandomFromRange() method and shows that there should be also problem that can be detected by contract. Download Code Contracts example VS2010 solution | 30KB

    Read the article

  • Fixing LINQ Error: Sequence contains no elements

    - by ChrisD
    I've read some posts regarding this error when using the First() or Single() command. They suggest using FirstOrDefault() or SingleOrDefault() instead. But I recently encountered it when using a Max() command in conjunction with a Where():

        var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).Max(p => p.Amount);

    When the Where() call eliminated all the items in the policies collection, the Max() command threw the "Sequence contains no elements" exception. Inserting the DefaultIfEmpty() command between the Where() and the Max() prevents this error:

        var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).DefaultIfEmpty().Max(p => p.Amount);

    but now throws a NullReferenceException! The Fix: using a combination of DefaultIfEmpty() and a null check in the Max() lambda solves this problem entirely:

        var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).DefaultIfEmpty().Max(p => p == null ? 0 : p.Amount);

    Read the article

  • IE browser doesn't close and file download popup needs focus

    - by jkranthi
    Hi all, I am trying to click on a link once it becomes active, which then produces a File Download popup. Here I have 2 problems. 1) I start the code and leave it. After a long process it waits for the link to become active; once the link is active it clicks on it and a download popup opens (if everything goes well), and then it hangs there, flashing yellow in the task bar, which means I have to click on Internet Explorer for it to process whatever is next. Every time the download popup appears I have to click on IE. Is there a way to handle this, or am I doing something wrong? 2) The next problem is that even when I click on IE, it does not close even though I call ie.close. My code is below:

        ## if the link is active
        ie.link(:text,a).click_no_wait
        prompt_message = "Do you want to open or save this file?"
        window_title = "File Download"
        save_dialog = WIN32OLE.new("AutoItX3.Control")
        save_dialog.WinGetText(window_title)
        save_dialog_obtained = save_dialog.WinWaitActive(window_title)
        save_dialog.WinKill(window_title)
        # end
        # ... some more code - normal puts statements
        ie.close

    IE is hanging up for some strange reason..?

    Read the article

  • Nant task sysinfo verbose - fails

    - by Broken Link
    Nant.core.dll: 0.86.2898.0. I cannot get the following tags working on my machine: <sysinfo verbose="true" /> and <sysinfo />. They give me the error below; if I comment out those two lines I am able to build. I googled but found not much help. Any ideas?

        NAnt 0.85 (Build 0.85.1932.0; rc3; 4/16/2005)
        Copyright (C) 2001-2005 Gerry Shaw
        http://nant.sourceforge.net
        Buildfile: file:///C:/xyz/source/Default.build
        Target framework: Microsoft .NET Framework 1.1
        Target(s) specified: all
        [tstamp] Thursday, July 30, 2009 3:10:24 PM.
        [sysinfo] Setting system information properties under sys.*
        BUILD FAILED
        Property name 'sys.env.Zen Managed Workstation' is invalid.
        Total time: 0 seconds.

    Read the article

  • How to copy an array of char pointers with a larger list of char pointers?

    - by Casey Link
    My function is being passed a struct containing, among other things, a NULL-terminated array of pointers to the words making up a command with arguments. I'm performing a glob match on the list of arguments to expand them into a full list of files, and then I want to replace the passed argument array with the new expanded one. The globbing is working fine, that is, g.gl_pathv is populated with the list of expected files. However, I am having trouble copying this array into the struct I was given.

        #include <glob.h>

        struct command {
            char **argv;
            // other fields...
        };

        void myFunction( struct command * cmd )
        {
            char **p = cmd->argv;
            char* program = *p++; // save the program name (e.g. 'ls') and increment to the first argument
            glob_t g;
            memset(&g, 0, sizeof(g));
            int res = glob(*p, 0, NULL, &g);
            *p++; // increment
            while (*p) {
                glob(*p++, GLOB_APPEND, NULL, &g); // append the matches
            }
            // here I want to replace cmd->argv with the expanded g.gl_pathv
            memcpy(cmd->argv, g.gl_pathv, g.gl_pathc ); // this doesn't work
            globfree(&g);
        }

    Read the article

  • Linking to libcrypto for Leopard?

    - by adib
    Hi, I am using the Mac OS X 10.6 SDK and my deployment target is set to Mac OS X 10.5. I'm linking to libcrypto (AquaticPrime requires this) and found out that my app doesn't launch on Leopard. The error is: dyld: Library not loaded: /usr/lib/libcrypto.0.9.8.dylib. I've tried the following workarounds but none of them work:
    1. Linking directly to libcrypto.0.9.7.dylib (the 10.6 SDK refuses to link directly with libcrypto.0.9.7.dylib).
    2. Copying the 10.5 SDK's version of libcrypto.0.9.7.dylib to the 10.6 lib directory and trying to link with it (this time the link process succeeded, but on Leopard the app still tries to look up the non-existent libcrypto.0.9.8.dylib file and thus won't launch).
    3. Copying libcrypto.0.9.7.dylib from a Mac OS X 10.5.8 installation and linking with it (the link was successful but the app still looks for libcrypto.0.9.8.dylib).
    Is there a way to link to this library and still use the 10.6 SDK? Thanks.

    Read the article

  • Is chess-like AI really inapplicable in turn-based strategy games?

    - by Joh
    Obviously, trying to apply the min-max algorithm to the complete tree of moves works only for small games (I apologize to all chess enthusiasts; by "small" I do not mean "simplistic"). For typical turn-based strategy games, where the board is often wider than 100 tiles and all pieces on a side can move simultaneously, the min-max algorithm is inapplicable. I was wondering if a partial min-max algorithm which limits itself to N board configurations at each depth couldn't be good enough. Using a genetic algorithm, it might be possible to find a number of board configurations that are good with respect to the evaluation function. Hopefully, these configurations might also be good with respect to long-term goals. I would be surprised if this hasn't been thought of and tried before. Has it? How does it work?
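    For what it's worth, the "limit yourself to N configurations at each depth" idea is essentially minimax with a beam: generate successors, score them with the static evaluation function, keep only the best N, and recurse. The Python sketch below is a generic illustration rather than code for any particular game; the evaluate and successors callables are assumed to be supplied by the game, with successors returning a list of states.

        def beam_minimax(state, depth, beam_width, maximizing, evaluate, successors):
            # Depth-limited minimax that only expands the beam_width most promising
            # successors at each level, ranked by the static evaluation function.
            if depth == 0:
                return evaluate(state)

            children = list(successors(state))
            if not children:
                return evaluate(state)

            # Keep only the N best-looking children for the side to move.
            children.sort(key=evaluate, reverse=maximizing)
            best = None
            for child in children[:beam_width]:
                score = beam_minimax(child, depth - 1, beam_width,
                                     not maximizing, evaluate, successors)
                if best is None or (maximizing and score > best) or (not maximizing and score < best):
                    best = score
            return best

    The obvious caveat is that the pruning is only as good as the static evaluation: a move that looks poor at depth 1 but wins at depth 3 never makes it into the beam, which is exactly the risk the question is weighing.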

    Read the article

  • Project Euler 3: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 3. As always, any feedback is welcome.

        # Euler 3
        # http://projecteuler.net/index.php?section=problems&id=3
        # The prime factors of 13195 are 5, 7, 13 and 29.
        # What is the largest prime factor of the number
        # 600851475143?

        import time
        start = time.time()

        def largest_prime_factor(n):
            max = n
            divisor = 2
            while (n >= divisor ** 2):
                if n % divisor == 0:
                    max, n = n, n / divisor
                else:
                    divisor += 1
            return max

        print largest_prime_factor(600851475143)
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')
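    One thing worth double-checking in the version above: because max is set to the value of n before each division, the function can hand back the last composite cofactor rather than the final prime for some inputs. A minimal variant that instead returns the remaining cofactor once no divisor up to its square root divides it, plus an optional cross-check (assuming sympy is available), is sketched below; neither is from the original post.

        def largest_prime_factor_v2(n):
            divisor = 2
            while divisor * divisor <= n:
                if n % divisor == 0:
                    n //= divisor          # strip the factor; what remains still needs factoring
                else:
                    divisor += 1
            return n                       # n is now the largest prime factor

        print(largest_prime_factor_v2(600851475143))   # 6857

        # Optional cross-check with sympy, if it is installed:
        # from sympy import primefactors
        # print(max(primefactors(600851475143)))       # 6857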

    Read the article

  • Why does my model render differently than its preview?

    - by Raven Dreamer
    I've been importing .fbx files that I made with 3DS Max 2012 into Unity, and it's quite neat to see my models running around in game. However, I can't help but notice that the models, as they're rendered in game, vary substantially from what they look like in the preview (and also what they looked like in 3DS Max). Observe: In-Game Unity Preview 3DS Max My gut tells me that I'm not setting up Unity's lighting system properly. What, then, do I need to do, to either my scene or my model, in order to get the left-most picture to look like the middle one?

    Read the article

  • Is this a link scheme? If so, what to do? what problems can i face?

    - by guisasso
    I was asked to remodel a website, and decided to check its rank on Alexa. Surprisingly, there are many, many different websites linking to it, none of them relevant. One peculiar thing is that none of these URLs work, and they all display the exact same error when accessed, which to me is a very good indication that this is some sort of linking scheme (besides the somewhat obvious names, it even says "scheme" in one of the URLs!?). If so, how should I proceed with this website? What can I do if this is in fact a scheme, how can it hurt the website, what types of problems can I face, and what can I do about it?

        addurlnow . info
        dirlist15.addurlnow . info/Business___Economy/Services/page-12.html
        linkdirectory101 . info
        dirlist16.linkdirectory101 . info/Business___Economy/Services/page-15.html
        seonetblog . info
        dirlist52.seonetblog . info/Business___Economy/Affiliate_Schemes
        addurls . us
        dirlist21.addurls . us/Business___Economy/Services/page-10.html
        webdirectoriessite . info
        dirlist20.webdirectoriessite . info/Business___Economy/Services/page-6.html
        addurlstore . info
        dirlist10.addurlstore . info/business___economy/services/page-14.html
        ukwebdirectorys . info
        dirlist21.ukwebdirectorys . info/Business___Economy/Services/page-13.html

    Read the article

  • Increasing speed of circle over time as linear with Box2d

    - by Whispered
    Assume that there is a circle and it can be moved using the keyboard arrows. The speed is required to increase over time, like a car accelerating. For example, the max speed is 25 and the time to reach max speed shall be 5 seconds; over those 5 seconds the speed ramps up to the maximum. Does Box2D handle that situation? I tried setting the linear velocity, but it seems to give the circle a constant speed instead of a speed that increases over time. Thank you! Note: I'm using Box2DWeb, the JavaScript port of Box2D.
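    Setting the linear velocity directly does pin the body to a constant speed, so the ramp has to come from the caller: either apply a force each step and let Box2D integrate it, or compute the ramped speed yourself each tick and cap it at the maximum. The sketch below shows the second idea in plain Python rather than Box2DWeb; the max_speed and ramp_time numbers are simply the values from the question, and the update helper is illustrative.

        MAX_SPEED = 25.0      # target speed from the question
        RAMP_TIME = 5.0       # seconds to reach it

        def ramped_speed(elapsed):
            # Linear ramp: 0 at t=0, MAX_SPEED once t >= RAMP_TIME.
            return min(MAX_SPEED, MAX_SPEED * elapsed / RAMP_TIME)

        def update(elapsed, direction):
            # Per-frame update; the physics step itself would be Box2D's own Step call.
            speed = ramped_speed(elapsed)
            vx, vy = direction[0] * speed, direction[1] * speed
            # In Box2DWeb this is where (vx, vy) would be handed to the body each step
            # while the key is held, e.g. via its SetLinearVelocity-style call.
            return vx, vy

        if __name__ == "__main__":
            for t in (0.0, 1.0, 2.5, 5.0, 7.0):
                print("t=%.1f  speed=%.1f" % (t, ramped_speed(t)))

    Alternatively, applying a constant force each step and letting linear damping or a velocity clamp provide the ceiling produces the same car-like acceleration without computing the ramp by hand.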

    Read the article

  • URL with no query parameters - How to distinguish.

    - by Broken Link
    Env: .NET 1.1. I got into this situation where I need to give out a URL that someone can use to redirect users to our page. When they redirect, they also need to tell us what message to display on the page. Initially I thought of something like this:

        http://example.com/a.aspx?reason=100
        http://example.com/a.aspx?reason=101
        ...
        http://example.com/a.aspx?reason=115

    So when we get the URL, we can display a different message based on 'reason'. But the problem turns out to be that they cannot send any query parameters at all. They want 15 different URLs since they can't send query params. It doesn't make any sense to me to create 15 pages just to display a message. Any smart ideas that keep one URL and pass the 'reason' through some other means?

    EDIT: Options I'm considering based on the answers: try HttpRequest.PathInfo, or, as a second option, have an HttpHandler read the path via HttpContext.Request.Path and act based on it. Of course, I will then have some 15 entries like this in web.config:

        <add verb="*" path="reason1.ashx" type="WebApplication1.Class1, WebApplication1" />
        <add verb="*" path="reason2.ashx" type="WebApplication1.Class1, WebApplication1" />

    Does that look clean?

    Read the article
