Search Results

Search found 5123 results on 205 pages for 'gateway timeout'.

Page 189/205

  • Table prefix for MySqlMembershipProvider

    - by choudeshell
    I have MySqlMembershipProvider working with ASP.NET MVC. My question is how I can configure the table prefix, so that instead of the 'my_aspnet_' prefix on the tables it is either empty or defined by me. My web.config:

        <?xml version="1.0"?>
        <add name="ApplicationServices"
             connectionString="server=localhost;user id=root;Password=*********;database=sparkSources"
             providerName="MySql.Data.MySqlClient"/>
        <authentication mode="Forms">
          <forms loginUrl="~/Account/LogOn" timeout="2880" />
        </authentication>
        <membership defaultProvider="MySqlMembershipProvider">
          <providers>
            <clear/>
            <add name="MySqlMembershipProvider"
                 type="MySql.Web.Security.MySQLMembershipProvider, MySql.Web, Version=6.3.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                 autogenerateschema="true"
                 tablePrefix="ss"
                 connectionStringName="ApplicationServices"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="true"
                 requiresQuestionAndAnswer="false"
                 requiresUniqueEmail="false"
                 passwordFormat="Hashed"
                 maxInvalidPasswordAttempts="5"
                 minRequiredPasswordLength="6"
                 minRequiredNonalphanumericCharacters="0"
                 passwordAttemptWindow="10"
                 passwordStrengthRegularExpression=""
                 applicationName="sparkSources" />
          </providers>
        </membership>
        <profile>
          <providers>
            <clear/>
            <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider"
                 connectionStringName="ApplicationServices" applicationName="/" />
          </providers>
        </profile>
        <roleManager enabled="false">
          <providers>
            <clear/>
            <add name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider"
                 connectionStringName="ApplicationServices" applicationName="/" />
            <add name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider"
                 applicationName="/" />
          </providers>
        </roleManager>
        <pages>
          <namespaces>
            <add namespace="System.Web.Mvc" />
            <add namespace="System.Web.Mvc.Ajax" />
            <add namespace="System.Web.Mvc.Html" />
            <add namespace="System.Web.Routing" />
          </namespaces>
        </pages>

    Read the article

  • how do I make my submenu position dynamic based on the distance to the edge of the window?

    - by Mario Antoci
    I'm trying to write a jQuery script that will find the distance from my CSS class element to the right edge of the browser window and then position the child submenu dropdowns to the right or left, depending on the available space to the right. It also needs to revert to the default settings on hover out. Here is what I have so far, but it isn't calculating properly:

        $(document).ready(function(){
            $('#dnnMenu .subLevel').hover(function(){
                if ($(window).width() - $('#dnnMenu .subLevel').offset().left - '540' >= '270') {
                    $('#dnnMenu .subLevelRight').css('left', '270px');
                } else {
                    $('#dnnMenu .subLevelRight').css('left', '-270px');
                }
            });

        $(document).ready(function () {
            function HoverOver() { $(this).addClass('hover'); }
            function HoverOut() { $(this).removeClass('hover'); }
            var config = { sensitivity: 2, interval: 100, over: HoverOver, timeout: 100, out: HoverOut };
            $("#dnnMenu .topLevel > li.haschild").hoverIntent(config);
            $(".subLevel li.haschild").hover(HoverOver, HoverOut);
        });

    Basically I take the width of the current window, minus the distance from the left edge of the browser to the first-level submenu, minus the combined width of both elements (540px), to calculate the distance to the right edge of the window when the first-level submenu is hovered over. If the distance to the right of my first-level submenu element is less than 540px, the second-level submenu's CSS is changed so it is positioned to the left instead of the right. It also needs to revert to the defaults after hover out, so it can recalculate the distance from other positions within the menu structure and those second-level submenus still have enough room to display on the right of the first level. Here is the CSS for the elements in question:

        #dnnMenu .subLevel {
            display: none;
            position: absolute;
            margin: 0;
            z-index: 1210;
            background: #639ec8;
            text-transform: none;
        }
        #dnnMenu .subLevelRight {
            position: absolute;
            display: none;
            left: 270px;
            top: 0px;
        }

    The site isn't live yet, and I tried to create a jsFiddle but it doesn't look right. Any help would be greatly appreciated! Best Regards, Mario

    Read the article

  • C++ SQLDriverConnect API

    - by harshalkreddy
    Hi, I am using visual studio 2008 and sql server 2008 for developing application(SQL server is in my system). I need to fetch some fields from the database. I am using the SQLDriverConnect API to connect to the database. If I use the "SQL_DRIVER_PROMPT" I will get pop window to select the data source. I don't want this window to appear. As per my understanding this window will appear if we provide insufficient information in the connection string. I think I have provided all the information. I am trying to connect with windows authentication. I tried different options but still no luck. Please help me in solving this problem. Below is the code that I am using: //******************************************************************************** // SQLDriverConnect_ref.cpp // compile with: odbc32.lib user32.lib #include <windows.h> #include <sqlext.h> int main() { SQLHENV henv; SQLHDBC hdbc; SQLHSTMT hstmt; SQLRETURN retcode; SQLWCHAR OutConnStr[255]; SQLSMALLINT OutConnStrLen; SQLCHAR ConnStrIn[255] = "DRIVER={SQL Server};SERVER=(local);DSN=MyDSN;DATABASE=MyDatabase;Trusted_Connection=yes;"; //SQLWCHAR *ConntStr =(SQLWCHAR *) "DRIVER={SQL Server};DSN=MyDSN;"; HWND desktopHandle = GetDesktopWindow(); // desktop's window handle // Allocate environment handle retcode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv); // Set the ODBC version environment attribute if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { retcode = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER*)SQL_OV_ODBC3, 0); // Allocate connection handle if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { retcode = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc); // Set login timeout to 5 seconds if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { SQLSetConnectAttr(hdbc, SQL_LOGIN_TIMEOUT, (SQLPOINTER)5, 0); retcode = SQLDriverConnect( // SQL_NULL_HDBC hdbc, desktopHandle, (SQLWCHAR *)ConnStrIn, SQL_NTS, OutConnStr, 255, &OutConnStrLen, SQL_DRIVER_NOPROMPT); // Allocate statement handle if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { retcode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt); // Process data if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) { SQLFreeHandle(SQL_HANDLE_STMT, hstmt); } SQLDisconnect(hdbc); } SQLFreeHandle(SQL_HANDLE_DBC, hdbc); } } SQLFreeHandle(SQL_HANDLE_ENV, henv); } } //******************************************************************************** Thanks in advance, Harsha
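
    One thing worth double-checking, independent of the C/ODBC plumbing (the code builds a narrow SQLCHAR connection string and then casts it to SQLWCHAR *, which in a Unicode build hands the wide-character API a mangled string), is whether the connection string on its own is enough to connect with Windows authentication and no prompt. A quick way to test exactly that string from a small script -- an illustrative sketch assuming the pyodbc package and the same "SQL Server" driver; the database name is a placeholder:

        import pyodbc

        # Same string as the C++ code, minus the DSN= part (DRIVER= and DSN= together
        # are usually redundant -- one or the other is enough).
        conn_str = (
            "DRIVER={SQL Server};"
            "SERVER=(local);"
            "DATABASE=MyDatabase;"
            "Trusted_Connection=yes;"
        )
        conn = pyodbc.connect(conn_str, timeout=5)   # connects without any driver prompt
        print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
        conn.close()

    If this connects cleanly, the string is complete, and the prompting in the C++ version points back at how the string is being passed rather than at its contents.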

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a ranking query that is taking forever (the query itself works, but I know it's awful, and when I tried it with a realistic number of records it timed out). I'll briefly explain the model. I have three tables: player, team and player_team. Players are stored in player and teams in team, and a player can belong to a team. In my app each player can switch teams at any time, and a log has to be maintained; however, a player is considered to belong to only one team at a given time, namely the last one he joined. The structure of player and team is not relevant, I think; I have an id PK column in each. In player_team I have: id (PK), player_id (FK -> player.id), team_id (FK -> team.id). Each team is assigned a point for each player that has joined, so I want a ranking of the first N teams with the biggest number of players. My first idea was to get the current players from player_team first (that is, at most one record per player, which must be the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM player_team pt
        JOIN player p ON (p.id = pt.player_id)
        JOIN team t ON (t.id = pt.team_id)
        WHERE pt.id IN (
            SELECT max(J.id)
            FROM player_team J
            GROUP BY J.player_id
        )
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50

    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way. Does anyone have any ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance. Adding the EXPLAIN output:

        id  select_type         table  type    possible_keys                                 key               key_len  ref           rows    Extra
        1   PRIMARY             t      ALL     PRIMARY                                       NULL              NULL     NULL          5000    Using temporary; Using filesort
        1   PRIMARY             pt     ref     FKplayer_pt77082,FKplayer_pt265938,new_index  FKplayer_pt77082  4        t.id          30      Using where
        1   PRIMARY             p      eq_ref  PRIMARY                                       PRIMARY           4        pt.player_id  1
        2   DEPENDENT SUBQUERY  J      index   NULL                                          new_index         8        NULL          150000  Using index
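
    One common rewrite is to materialise "each player's latest player_team row" once in a derived table, instead of letting MySQL re-run the IN subquery as a DEPENDENT SUBQUERY. An untested sketch -- table and column names are taken from the question, connection details are placeholders, and it assumes the mysql-connector-python package:

        import mysql.connector

        RANKING_SQL = """
            SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
            FROM (SELECT player_id, MAX(id) AS max_id
                    FROM player_team
                   GROUP BY player_id) latest        -- one row per player, computed once
            JOIN player_team pt ON pt.id = latest.max_id
            JOIN player p ON p.id = pt.player_id
            JOIN team t ON t.id = pt.team_id
            GROUP BY pt.team_id
            ORDER BY total DESC
            LIMIT 50
        """

        cnx = mysql.connector.connect(host="localhost", user="app",
                                      password="secret", database="game")
        cur = cnx.cursor()
        cur.execute(RANKING_SQL)
        for total, team_id, owner_uid, color in cur.fetchall():
            print(team_id, total, owner_uid, color)
        cnx.close()

    A composite index on player_team (player_id, id) lets that derived table be served from the index alone, which tends to be where most of the time goes in the original plan.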

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far I've managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own.

    But unit testing seems to be an extremely brittle mechanism. You can test for something in "perfect" conditions, but you don't get to see how your code performs when stuff breaks. Take a crawler, for instance; let's say it crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope that the sites never change? This would work fine as regression tests, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because a site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code?

    The above example is a bit contrived, and something I haven't run into (in case you hadn't guessed). Let me pick something I have, though. How do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it's supposed to; but does it? The developer tests it personally by purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time, or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you?

    Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay but the data isn't really valid. You want the code to handle that, so you write some code that you think does. How do you test that it does exactly that for the life of the application? Can you?

    Hence, brittleness. Unit tests seem to exercise the code only in perfect conditions (and this is promoted, with mock objects and such), not in what it will face in the wild. Don't get me wrong: I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs into your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
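
    For the degraded-network example specifically, the usual approach is to make the transport injectable, so a test double can simulate loss and the intent ("degrade gracefully") becomes an assertion. An illustrative sketch -- in Python rather than C++, and every name in it is made up:

        import random
        import unittest

        class FlakyTransport:
            """Test double that simulates packet loss by failing a fraction of sends."""
            def __init__(self, loss_rate, seed=42):
                self.loss_rate = loss_rate
                self.rng = random.Random(seed)   # seeded, so the test is repeatable
                self.delivered = []

            def send(self, payload):
                if self.rng.random() < self.loss_rate:
                    raise TimeoutError("simulated packet loss")
                self.delivered.append(payload)

        def send_with_retries(transport, payload, attempts=5):
            """The behaviour under test: retry a few times, then report failure cleanly."""
            for _ in range(attempts):
                try:
                    transport.send(payload)
                    return True
                except TimeoutError:
                    continue
            return False

        class DegradedNetworkTest(unittest.TestCase):
            def test_lossy_link_degrades_gracefully(self):
                transport = FlakyTransport(loss_rate=0.6)
                results = [send_with_retries(transport, i) for i in range(100)]
                # Most messages should still get through, and failures must surface
                # as a return value rather than an unhandled exception.
                self.assertGreater(sum(results), 80)
                self.assertEqual(len(transport.delivered), sum(results))

        if __name__ == "__main__":
            unittest.main()

    It doesn't replace the "real gateway that drops packets" experiment, but it does keep the graceful-degradation contract under continuous, repeatable test after the original author has moved on.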

    Read the article

  • HttpWebRequest possibly slowing website

    - by Steven Smith
    Using Visual Studio 2012, C# .NET 4.5, SQL Server 2008, Feefo, nopCommerce. I recently implemented a new review service into a current site we have. When the change went live, everything worked fine on the first day. Since then, though, the sending of sales to Feefo hasn't been working, and there are no logs of anything going wrong. In OrderProcessingService.cs in nopCommerce's services, I make an HttpWebRequest when an order has been confirmed as completed. Here is the code:

        var email = HttpUtility.UrlEncode(order.Customer.Email.ToString());
        var name = HttpUtility.UrlEncode(order.Customer.GetFullName().ToString());
        var description = HttpUtility.UrlEncode(productVariant.ProductVariant.Product.MetaDescription != null
            ? productVariant.ProductVariant.Product.MetaDescription.ToString()
            : "product");
        var orderRef = HttpUtility.UrlEncode(order.Id.ToString());
        var productLink = HttpUtility.UrlEncode(string.Format("myurl/p/{0}/{1}",
            productVariant.ProductVariant.ProductId,
            productVariant.ProductVariant.Name.Replace(" ", "-")));

        string itemRef = "";
        try
        {
            itemRef = HttpUtility.UrlEncode(productVariant.ProductVariant.ProductId.ToString());
        }
        catch
        {
            itemRef = "0";
        }

        var url = string.Format("feefo Url", login, password, email, name, description, orderRef, productLink, itemRef);
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.KeepAlive = false;
        request.Timeout = 5000;
        request.Proxy = null;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusDescription == "OK")
            {
                var stream = response.GetResponseStream();
                if (stream != null)
                {
                    using (var reader = new StreamReader(stream))
                    {
                        var content = reader.ReadToEnd();
                    }
                }
            }
        }

    So as you can see, it's a simple web request that is processed on an order, and all product variants are sent to Feefo. This hasn't been happening all week since the 15th (the day of the implementation), and the site has been grinding to a halt recently. The stream and reader behind var content are there for debugging. Does the code red-flag anything to you that could affect the performance of the website? Also note that I have run some SQL statements to check for deadlocks or large lock escalations; so far that seems fine, and the logs only show the usual bot activity. Any help would be much appreciated! EDIT: also note that this code is in a method that is called and wrapped in a try-catch. UPDATE: forget about the "not sending" part; that turned out to be because my code was rolled back last week.
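
    One thing that does stand out: the request is made synchronously on the order-processing path with a 5-second timeout, so a slow or flaky Feefo endpoint can hold ASP.NET request threads and drag the whole site down. A common mitigation is to enqueue the notification and send it from a background worker, so the order path only ever pays for an enqueue. An illustrative sketch of that shape -- in Python for brevity (in nopCommerce this would more likely be a scheduled task or a queued job), with the URL handling reduced to a plain GET:

        import queue
        import threading
        import urllib.request

        feefo_queue = queue.Queue()

        def notify_worker():
            """Drain queued sale notifications so slow calls never block order processing."""
            while True:
                url = feefo_queue.get()
                try:
                    with urllib.request.urlopen(url, timeout=5) as resp:
                        if resp.status != 200:
                            print("feefo returned", resp.status, "for", url)
                except OSError as exc:          # timeouts, DNS failures, HTTP errors
                    print("feefo notification failed:", exc)   # log and move on
                finally:
                    feefo_queue.task_done()

        threading.Thread(target=notify_worker, daemon=True).start()

        def on_order_completed(sale_urls):
            """Called from the order path: enqueueing is effectively instant."""
            for url in sale_urls:
                feefo_queue.put(url)

    The same idea applies in .NET with a producer/consumer queue or nopCommerce's task scheduler; the key point is that the third-party call is taken off the user's request.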

    Read the article

  • When to delete newly deprecated code?

    - by John
    I spent a month writing an elaborate payment system that handles both credit card payments and electronic fund transfers. My work was used on production server for about a month. I was told recently by the client that he no longer wants to use the electronic fund transfer feature. Because the way I had to interface and communicate with the credit card gateway is drastically different from the electronic fund transfer api (eg. the cc company gives transaction responses immediately after an http request, while the eft company gives transaction responses 5 business days after an http request), I spent a lot of time writing my own API to abstract common function calls like function payment(amount, pay_method,pay_freq) function updateRecurringSchedule(user_id,new_schedule) etc.. Now that the client wants to abandon the EFT feature, all my work for this abstracted payments API is obsolete. I'm deliberating over whether I should scrap my work. Here's my pro vs. con for scrapping it now: PRO 1: Eliminate code bloat PRO 2: New developers do not need to learn MY API. They only need to read the CC company's API PRO 3: Because the EFT company did not handle recurring payment schedules, refunds, and validation, I wrote my own application to do it. Although the CC company's API permitted this functionality, I opted to use mine instead so that I could streamline my code. now that EFT is out of the picture, I can delete all this confusing code and just rely on the CC company's sytsem to manage recurring billing, payment schedules, refunds, validations etc... CON 1: Although I can just delete the EFT code, it still takes time to remove the entire framework consolidates different payment systems. CON 2: with regards to PRO 3, it takes time to build functionality that integrates the payment system more closely with the CC company. CON 3: I feel insecure deleting all this work. I don't think I'll ever use it again. But, for some inexplicable reason, I just don't feel comfortable deleting this work "right now". So my question is, should I delete one month's worth recent development? If yes, should I do it immediately or wait X amount of time before doing so?

    Read the article

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes problems?

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy: struct fd_set ref_set_rd; struct fd_set ref_set_wr; struct fd_set ref_set_er; ... ...code to set the reference fd_set_xx values... ... while (!done) { struct fd_set act_set_rd = ref_set_rd; struct fd_set act_set_wr = ref_set_wr; struct fd_set act_set_er = ref_set_er; int bits_set = select(max_fd, &act_set_rd, &act_set_wr, &act_set_er, &timeout); if (bits_set > 0) { ...process the output values of act_set_xx... } } My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' which addresses a different set of issues from this question.
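
    Whatever the portability answer for struct fd_set, the pattern itself -- keep reference sets, hand each call a fresh working copy -- is language-neutral. For comparison, an illustrative sketch of the same loop in Python, where select.select() takes the reference lists on every call and returns brand-new result lists, so nothing needs to be copied back:

        import select
        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", 9000))
        server.listen()

        ref_read = [server]          # the "reference set": sockets we always want to watch

        while True:
            # select() gets the current reference list each time and returns fresh lists,
            # the analogue of copying the fd_set before every call.
            readable, _, _ = select.select(ref_read, [], [], 1.0)
            for sock in readable:
                if sock is server:
                    conn, _addr = server.accept()
                    ref_read.append(conn)
                else:
                    data = sock.recv(4096)
                    if data:
                        sock.sendall(data)
                    else:
                        ref_read.remove(sock)
                        sock.close()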

    Read the article

  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview I have a Microsoft Word Add-In, written in VBA (Visual Basic for Applications), that compresses a document and all of it's related contents (embedded media) into a zip archive. After creating the zip archive it then turns the file into a byte array and posts it to an ASMX web service. This mostly works. Issues The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40MB, but not one that is 140MB (timeout/general failure). A secondary issue is that building the byte array in the VBScript Word Add-In can fail by running out of memory on the client machine if the zip archive is too large. Potential Solutions I am considering the following options and am looking for feedback on either option or any other suggestions. Option One Opening a file stream on the client (MS Word VBA) and reading one "chunk" at a time and transmitting to ASMX web service which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application, I would only be modifying existing functionality. (Fewer dependencies is better as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)? Option Two I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or if I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
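
    For Option One, the server-side reassembly can be as simple as appending each received chunk to a file keyed by an upload id, while the client reads and sends one chunk at a time so the whole archive never has to sit in memory as a byte array. An illustrative sketch of the pattern -- Python, with made-up names; in the real system the sender would be the VBA add-in and the receiver the ASMX/WCF method:

        import os

        CHUNK_SIZE = 512 * 1024   # 512 KB per round trip keeps each request small

        def receive_chunk(upload_id, chunk, uploads_dir="uploads"):
            """Stand-in for the web-service method: append one chunk to this upload's file."""
            os.makedirs(uploads_dir, exist_ok=True)
            with open(os.path.join(uploads_dir, upload_id), "ab") as f:   # append = reassemble
                f.write(chunk)

        def upload_file(path, upload_id):
            """Client side: stream the archive one chunk at a time."""
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    receive_chunk(upload_id, chunk)

        if __name__ == "__main__":
            upload_file("document.zip", "upload-123")   # assumes document.zip exists locally

    In VBA the client half is the equivalent Open ... For Binary / Get loop posting each chunk; the service then only ever sees requests far below any upload or timeout limit.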

    Read the article

  • Replace Infinite loop in Flex

    - by H P
    Hello, I want to access a webservice:getMonitorData() , on creationcomplete and returns an array, in an infinite loop so that the getIndex0.text is updated each time. Flex is not able to handle an infinite loop and gives a timeout error 1502. If I run the for loop until i<2000 or so it works fine. How can replace the loop so that my webservice is accessed continiously and the result is shown in getIndex0.text. This is how my application looks like: <?xml version="1.0" encoding="utf-8"?> <s:Group xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="400" height="300" xmlns:plcservicebean="server.services.plcservicebean.*" creationComplete="clientMonitor1()"> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.rpc.CallResponder; import mx.rpc.events.FaultEvent; import mx.rpc.events.ResultEvent; [Bindable] public var dbl0:Number; //-----------Infinite Loop, Works fine if condition = i<2000------------------------ public function clientMonitor1():void{ for(var i:int = 0; ; i++){ clientMonitor(); } } public function clientMonitor():void{ var callResp:CallResponder = new CallResponder(); callResp.addEventListener(ResultEvent.RESULT, monitorResult); callResp.addEventListener(FaultEvent.FAULT, monitorFault); callResp.token = plcServiceBean.getMonitorData(); } public function monitorResult(event:ResultEvent):void{ var arr:ArrayCollection = event.result as ArrayCollection; dbl0 = arr[0].value as Number; } protected function monitorFault(event:FaultEvent):void{ Alert.show(event.fault.faultString, "Error while monitoring Data "); } ]]> </fx:Script> <fx:Declarations> <plcservicebean:PlcServiceBean id = "plcServiceBean" showBusyCursor="true" fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)" /> </fx:Declarations> <mx:Form x="52" y="97" label="Double"> <mx:FormItem label = "getMonitorValue"> <s:TextInput id = "getIndex0" text = "{dbl0}"/> </mx:FormItem> </mx:Form> </s:Group>

    Read the article

  • Twitter RSS feed, [domdocument.load]: failed to open stream:

    - by dave1019
    hi i'm using the following: <?php $doc = new DOMDocument(); $doc->load('http://twitter.com/statuses/user_timeline/XXXXXX.rss'); $arrFeeds = array(); foreach ($doc->getElementsByTagName('item') as $node) { $itemRSS = array ( 'title' => $node->getElementsByTagName('title')->item(0)->nodeValue, 'desc' => $node->getElementsByTagName('description')->item(0)->nodeValue, 'link' => $node->getElementsByTagName('link')->item(0)->nodeValue, 'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue ); array_push($arrFeeds, $itemRSS); } for($i=0;$i<=3;$i++) { $tweet=substr($arrFeeds[$i]['title'],17); $tweetDate=strtotime($arrFeeds[$i]['date']); $newDate=date('G:ia l F Y ',$tweetDate); if($i==0) { $b='style="border:none;"'; } $tweetsBox.='<div class="tweetbox" ' . $b . '> <div class="tweet"><p>' . $tweet . '</p> <div class="tweetdate"><a href="http://twitter.com/XXXXXX">@' . $newDate .'</a></div> </div> </div>'; } return $tweetsBox; ?> to return the 4 most recent tweets from a given timeline (XXXXX is the relevant feed) It seems to work fine but i've recently been getting the following error sporadically: PHP error debug Error: DOMDocument::load(http://twitter.com/statuses/user_timeline/XXXXXX.rss) [domdocument.load]: failed to open stream: HTTP request failed! HTTP/1.1 502 Bad Gateway I've read that the above code is dependant on Twitter beign available and I know it gets rather busy sometimes. Is there either a better way of receiving twits, or is there any kind of error trapping i could do to just to display "tweets are currently unavailable..." ind of message rather than causing an error. I'm usnig ModX CMS so any parse error kills the site rather than just ouputs a warning. thanks.
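
    The sporadic 502 is Twitter being temporarily unavailable, so the load/parse needs to be wrapped so that failure produces a fallback message instead of a fatal error. In PHP that means suppressing or handling the warning and checking DOMDocument::load()'s false return value before touching the nodes. For the shape of it, an illustrative sketch in Python -- the feed URL is the one from the question and may no longer exist, which conveniently exercises the fallback path:

        import urllib.request
        import xml.etree.ElementTree as ET

        def latest_tweets(feed_url, count=4):
            """Return up to `count` (title, pubDate) pairs, or None if the feed is unreachable."""
            try:
                with urllib.request.urlopen(feed_url, timeout=10) as resp:
                    tree = ET.parse(resp)
            except (OSError, ET.ParseError):
                return None          # covers 502s, timeouts and malformed XML alike
            items = tree.findall(".//item")[:count]
            return [(i.findtext("title", ""), i.findtext("pubDate", "")) for i in items]

        tweets = latest_tweets("http://twitter.com/statuses/user_timeline/XXXXXX.rss")
        if tweets is None:
            print("tweets are currently unavailable...")
        else:
            for title, date in tweets:
                print(date, "-", title)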

    Read the article

  • Java compiler rejects variable declaration with parameterized inner class

    - by Johansensen
    I have some Groovy code which works fine in the Groovy bytecode compiler, but the Java stub generated by it causes an error in the Java compiler. I think this is probably yet another bug in the Groovy stub generator, but I really can't figure out why the Java compiler doesn't like the generated code. Here's a truncated version of the generated Java class (please excuse the ugly formatting): @groovy.util.logging.Log4j() public abstract class AbstractProcessingQueue <T> extends nz.ac.auckland.digitizer.AbstractAgent implements groovy.lang.GroovyObject { protected int retryFrequency; protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items; public AbstractProcessingQueue (int processFrequency, int timeout, int retryFrequency) { super ((int)0, (int)0); } private enum ProcessState implements groovy.lang.GroovyObject { NEW, FAILED, FINISHED; } private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject { public ProcessingQueueMember (E object) {} } } The offending line in the generated code is this: protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items; which produces the following compile error: [ERROR] C:\Documents and Settings\Administrator\digitizer\target\generated-sources\groovy-stubs\main\nz\ac\auckland\digitizer\AbstractProcessingQueue.java:[14,96] error: improperly formed type, type arguments given on a raw type The column index of 96 in the compile error points to the <T> parameterization of the ProcessingQueueMember type. But ProcessingQueueMember is not a raw type as the compiler claims, it is a generic type: private class ProcessingQueueMember <E> extends java.lang.Object implements groovy.lang.GroovyObject { ... I am very confused as to why the compiler thinks that the type Queue<ProcessingQueueMember<T>> is invalid. The Groovy source compiles fine, and the generated Java code looks perfectly correct to me too. What am I missing here? Is it something to do with the fact that the type in question is a nested class? (in case anyone is interested, I have filed this bug report relating to the issue in this question) Edit: Turns out this was indeed a stub compiler bug- this issue is now fixed in 1.8.9, 2.0.4 and 2.1, so if you're still having this issue just upgrade to one of those versions. :)

    Read the article

  • Weird Datagrid / paint behaviour

    - by Shane.C
    The scenario: A method sends out a broadcast packet, and returned packets that are validated are deemed okay to be added as a row in my datagrid (The returned packets are from devices i want to add into my program). So for each packet returned, containing information about a device, i create a new row. This is done by first sending packets out, creating rows and adding them to a list of rows that are to be added, and then after 5 seconds (In which case all packets would have returned by then) i add the rows. Here's a few snippets of code. Here for each returned packet, i create a row and add it to a list; DataRow row = DGSource.NewRow(); row["Name"] = deviceName; row["Model"] = deviceModel; row["Location"] = deviceLocation; row["IP"] = finishedIP; row["MAC"] = finishedMac; row["Subnet"] = finishedSubnet; row["Gateway"] = finishedGateway; rowsToAdd.Add(row); Then when the timer elapses; void timerToAddRows_Elapsed(object sender, System.Timers.ElapsedEventArgs e) { timerToAddRows.Enabled = false; try { int count = 0; foreach (DataRow rowToAdd in rowsToAdd) { DGSource.Rows.Add(rowToAdd); count++; } rowsToAdd.Clear(); DGAutoDevices.InvokeEx(f => DGAutoDevices.Refresh()); lblNumberFound.InvokeEx(f => lblNumberFound.Text = count + " new devices found."); } catch { } } So at this point, each row has been added, and i call the re paint, by doing refresh. (Note: i've also tried refreshing the form itself, no avail). However, when the datagrid shows the rows, the scroll bar / datagrid seems to have weird behavour..for example i can't highlight anything with clicks (It's set to full row selection), and the scroll bar looks like so; Calling refresh doesn't work, although if i resize the window even 1 pixel, or minimize and maximise, the problem is solved. Other things to note : The method that get's the packets and adds the rows to the list, and then from the list to the datagrid runs in it's own thread. Any ideas as to something i might be missing here?

    Read the article

  • Rewrite code from Threads to AnyEvent

    - by user1779868
    I wrote a code: use LWP::UserAgent; use HTTP::Cookies; use threads; use threads::shared; $| = 1; $threads = 50; my @urls : shared = loadf('url.txt'); my @thread_list = (); $thread_list[$_] = threads->create(\&thread) for 0 .. $threads - 1; $_->join for @thread_list; thread(); sub thread { my ($web, $ck) = browser(); while(1) { my $url = shift @urls; if(!$url) { last; } $code = $web->get($url)->code; print "[+] $url - code: $code\n"; if($code == 200) { open F, ">>200.txt"; print F $url."\n"; close F; } elsif($code == 301) { open F, ">>301.txt"; print F $url."\n"; close F; } else { open F, ">>else.txt"; print F "$url code - $code\n"; close F; } } } sub loadf { open (F, "<".$_[0]) or erroropen($_[0]); chomp(my @data = <F>); close F; return @data; } sub browser { my $web = new LWP::UserAgent; my $ck = new HTTP::Cookies; $web->cookie_jar($ck); $web->agent('Opera/9.80 (Windows 7; U; en) Presto/2.9.168 Version/11.50'); $web->timeout(5); return $web, $ck; } After its working for some time physical storage is full. Can u help me to re-write it with AnyEvent. I tried but my code didn't work. I read that it will help me to safe some memory. Thanks a lot to any helpers.
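
    A line-for-line AnyEvent translation is out of scope here, but the target shape is the same in any event-driven stack: one process, one loop, many in-flight requests, bounded concurrency -- instead of 50 threads each holding its own LWP::UserAgent and cookie jar. An illustrative sketch of that shape in Python with asyncio/aiohttp (assumes the aiohttp package; the file names match the original script):

        import asyncio
        import aiohttp

        CONCURRENCY = 50

        async def check(session, sem, url, results):
            async with sem:
                try:
                    async with session.get(url, allow_redirects=False,   # report the 301 itself
                                           timeout=aiohttp.ClientTimeout(total=5)) as resp:
                        code = resp.status
                except (aiohttp.ClientError, asyncio.TimeoutError):
                    code = 0
            print(f"[+] {url} - code: {code}")
            results.setdefault(code, []).append(url)

        async def main():
            with open("url.txt") as f:
                urls = [line.strip() for line in f if line.strip()]
            sem = asyncio.Semaphore(CONCURRENCY)
            results = {}
            async with aiohttp.ClientSession() as session:
                await asyncio.gather(*(check(session, sem, u, results) for u in urls))
            for code, got in results.items():
                name = {200: "200.txt", 301: "301.txt"}.get(code, "else.txt")
                with open(name, "a") as out:
                    out.writelines(u + "\n" for u in got)

        if __name__ == "__main__":
            asyncio.run(main())

    The memory concern with the threaded original (per-thread stacks plus 50 browser/cookie-jar objects) largely disappears with a single loop and a semaphore, while the effective parallelism stays the same.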

    Read the article

  • pagerAnchorBuilder - trying to add classes

    - by Bert Murphy
    I'm using Cycle2 to build a carousel gallery and I've run into a little problem regarding styling the pager buttons. What I've gathered is that Cycle2 creates its own pager span tags for each slide which is a bummer becaus I've already styled my the sub-nav markup. This should be a minor issue as long as I can assign individual classes to the spans and change my css accordingly. However, I can't get this to work. TLDR: I was hoping that I could use pagerAnchorBuilder to create individual classes for each span. I can't. Help. The original markup (pre Cycle2) looks like the following: <div id ="sub-nav" class="sub-nav"> <ul> <li><a id="available" class="available" href="#"></a></li> <li><a id="reliable" class="reliable" href="#"></a></li> <li><a id="use" class="use" href="#"></a></li> <li><a id="reports" class="reports" href="#"></a></li> <li class="last"><a id="software" class="software" href="#"></a></li> </ul> </div> With Cycle2 it looks like this (note the addition of the span tags) <div id ="sub-nav" class="sub-nav"> <ul> <li><a id="available" class="available" href="#"></a></li> <li><a id="reliable" class="reliable" href="#"></a></li> <li><a id="use" class="use" href="#"></a></li> <li><a id="reports" class="reports" href="#"></a></li> <li class="last"><a id="software" class="software" href="#"></a></li> </ul> <span class="cycle-pager-active">•</span><span>•</span><span>•</span><span>•</span><span>•</span></nav> </div> Cycle2 $('#sliding-gallery ul').cycle({ fx: 'carousel', carouselVisible: 1, speed: 'fast', timeout: 10000, slides: '>li', next: '.next', prev: '.prev', pager: '.sub-nav', pagerAnchorBuilder: function(idx, slide) { return '.sub-nav span:eq(' + idx + ')'; } });

    Read the article

  • Problem with recv() on a TCP connection

    - by michael
    Hi, I am simulating TCP communication on windows in C I have sender and a receiver communicating. sender sends packets of specific size to receiver. receiver gets them and send an ACK for each packet it received back to the sender. If the sender didn't get a specific packet (they are numbered in a header inside the packet) it sends the packet again to the receiver. Here is the getPacket function on the receiver side: //get the next packet from the socket. set the packetSize to -1 //if it's the first packet. //return: total bytes read // return: 0 if socket has shutdown on sender side, -1 error, else number of bytes received int getPakcet(char *chunkBuff,int packetSize,SOCKET AcceptSocket){ int totalChunkLen = 0; int bytesRecv=-1; bool firstTime=false; if (packetSize==-1) { packetSize=MAX_PACKET_LENGTH; firstTime=true; } int needToGet=packetSize; do { char* recvBuff; recvBuff = (char*)calloc(needToGet,sizeof(char)); if(recvBuff == NULL){ fprintf(stderr,"Memory allocation problem\n"); return -1; } bytesRecv = recv(AcceptSocket, recvBuff, needToGet, 0); if (bytesRecv == SOCKET_ERROR){ fprintf(stderr,"recv() error %ld.\n", WSAGetLastError()); totalChunkLen=-1; return -1; } if (bytesRecv == 0){ fprintf(stderr,"recv(): socket has shutdown on sender side"); return 0; } else if(bytesRecv > 0) { memcpy(chunkBuff + totalChunkLen,recvBuff,bytesRecv); totalChunkLen+=bytesRecv; } needToGet-=bytesRecv; } while ((totalChunkLen < packetSize) && (!firstTime)); return totalChunkLen; } i use firstTime because for the first time the receiver doesn't know the normal package size that the sender is going to send to it, so i use a MAX_PACKET_LENGTH to get a package and then set the normal package size to the num of bytes i have received my problem is the last package. it's size is less than the package size so lets say last package size is 2 and the normal package size is 4. so recv() gets two bytes, continues to the while condition, then totalChunkLen < packetSize because 2<4 so it iterates the loop again and the gets stuck in recv() because it's blocking because the sender has nothing to send. on the sender side i can't close the connection because i didn't ACK back, so it's kind of a deadlock. receiver is stuck because it's waiting for more packages but sender has nothing to send. i don't want to use a timeout for recv() or to insert a special character to the package header to mark that it is the last one what can i do ? thanks
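
    Since a timeout and a sentinel character are both ruled out, the remaining standard option is explicit framing: prefix every packet with its length, so the receiver always knows exactly how many bytes are still owed -- including for the short final packet -- and never blocks waiting for data that will not come. An illustrative sketch of length-prefixed framing (Python rather than C, but the header layout translates directly to the existing packet header):

        import socket
        import struct

        HEADER = struct.Struct("!I")   # 4-byte big-endian payload length

        def send_packet(sock, payload):
            sock.sendall(HEADER.pack(len(payload)) + payload)

        def recv_exactly(sock, n):
            """Loop until exactly n bytes arrive (recv may legally return fewer)."""
            buf = bytearray()
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("sender closed the connection mid-packet")
                buf.extend(chunk)
            return bytes(buf)

        def recv_packet(sock):
            (length,) = HEADER.unpack(recv_exactly(sock, HEADER.size))
            return recv_exactly(sock, length)   # works for the short last packet too

    With the length in the header, the receiver's inner loop terminates on a byte count rather than on guessing the "normal" packet size from the first read.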

    Read the article

  • Type Mismatch using VBScript to create Pivot Table/Chart

    - by Rodricks
    I get Run time error:Type mismatch for the following code: Dim Field Field="Gen8" '''' ============================================================================== EXCEL Sheet '==============Errors -Stacked Chart by Year and Week --ALL WEEKS ''''=================================================== objExcel.ActiveWorkbook.Worksheets.Add SheetNumber = SheetNumber ' add adds in front so sheetnumber stays 1 objExcel.Sheets(SheetNumber).Select objExcel.Sheets(SheetNumber).Activate objExcel.Sheets(SheetNumber).Name = "YRWk" SheetName = "SYS_Product_YRWeeks" '============== strSQLCustomers = "select isnull(AB.Week,D.Week_Num) AS YRWk,ISNULL(AB.UnCorrectable,0) as UE," & _ "isnull(AB.Correctable,0) as CE, isnull(AB.SYS_Product,'" & Field & "'" & _ ") as SYS_Product from AHS_Dates D Left Join (select * from P_tot where " & _ "SYS_Product = '" & Field & "'" & _ " ) AB on AB.Year_=D.Year_ and AB.Week=D.Week_Num order by YRWk" FetchData2.Open strSQLCustomers, openConnection, adOpenStatic, adLockReadOnly If FetchData2.RecordCount > 0 Then **objExcel.ActiveWorkbook.Connections.Add SheetName, "", _ Array(Array( _ "ODBC;DRIVER=SQL Server Native Client 10.0;SERVER=" & sServerIP & ";TimeOut=5000000; Trusted_Connection=Yes;Integrated Security=SSPI;" _ ), Array("DATABASE=" & sDataBaseName & ";")), Array(strSQLCustomers), 2** objExcel.ActiveWorkbook.PivotCaches.Create(SourceType:=xlExternal, SourceData:= _ objExcel.ActiveWorkbook.Connections(SheetName), Version:= _ xlPivotTableVersion14).CreatePivotTable TableDestination:=objExcel.Sheets(SheetNumber).Name & "!R3C7", _ TableName:="PivotTable" & SheetNumber, DefaultVersion:=xlPivotTableVersion14 Set ws = objExcel.ActiveWorkbook.Worksheets(objExcel.Sheets(SheetNumber).Name) objExcel.Cells(3, 7).Select ws.Shapes.AddChart.Select objExcel.ActiveWorkbook.ActiveChart.ChartType = xlAreaStacked objExcel.ActiveWorkbook.ActiveChart.SetSourceData Source:=ws.Range(objExcel.Sheets(SheetNumber).Name & "!$G$3:$I$20") With ws.PivotTables("PivotTable1").PivotFields("SYS_PRoduct") .Orientation = xlColumnField .Position = 1 End With With ws.PivotTables("PivotTable1").PivotFields("YRWk") .Orientation = xlRowField .Position = 1 End With ' With ws.PivotTables("PivotTable1").PivotFields("Year_") ' .Orientation = xlRowField ' .Position = 2 ' End With objExcel.ActiveWorkbook.ActiveChart.ChartTitle.Text = " Errors by Week and Year -ALLWEEKS" ws.PivotTables("PivotTable1").AddDataField ws.PivotTables( _ "PivotTable1").PivotFields("UE"), "Sum of UnCorrectable", xlSum ws.PivotTables("PivotTable1").AddDataField ws.PivotTables( _ "PivotTable1").PivotFields("CE"), "Sum of Correctable", xlSum End If ''MsgBox (FetchData2.RecordCount) FetchData2.Close I have used the same pivot chart + table in other slides. The problem I think is the query length My question: 1.Is there a better way for me to access the query results. Would appreciate the steps if any. 2.If I can make it a procedure how do I modify the pivot chart/table creation. Thanks. The query results with all 52 weeks: Week UE CE SYS_Product(or Field) 1 0 0 Gen8 2 0 0 Gen8 3 0 0 Gen8 4 0 0 Gen8 5 0 0 Gen8 6 0 0 Gen8

    Read the article

  • Can I avoid a threaded UDP socket in Python dropping data?

    - by 666craig
    First off, I'm new to Python and learning on the job, so be gentle! I'm trying to write a threaded Python app for Windows that reads data from a UDP socket (thread-1), writes it to file (thread-2), and displays the live data (thread-3) to a widget (gtk.Image using a gtk.gdk.pixbuf). I'm using queues for communicating data between threads. My problem is that if I start only threads 1 and 3 (so skip the file writing for now), it seems that I lose some data after the first few samples. After this drop it looks fine. Even by letting thread 1 complete before running thread 3, this apparent drop is still there. Apologies for the length of code snippet (I've removed the thread that writes to file), but I felt removing code would just prompt questions. Hope someone can shed some light :-) import socket import threading import Queue import numpy import gtk gtk.gdk.threads_init() import gtk.glade import pygtk class readFromUDPSocket(threading.Thread): def __init__(self, socketUDP, readDataQueue, packetSize, numScans): threading.Thread.__init__(self) self.socketUDP = socketUDP self.readDataQueue = readDataQueue self.packetSize = packetSize self.numScans = numScans def run(self): for scan in range(1, self.numScans + 1): buffer = self.socketUDP.recv(self.packetSize) self.readDataQueue.put(buffer) self.socketUDP.close() print 'myServer finished!' class displayWithGTK(threading.Thread): def __init__(self, displayDataQueue, image, viewArea): threading.Thread.__init__(self) self.displayDataQueue = displayDataQueue self.image = image self.viewWidth = viewArea[0] self.viewHeight = viewArea[1] self.displayData = numpy.zeros((self.viewHeight, self.viewWidth, 3), dtype=numpy.uint16) def run(self): scan = 0 try: while True: if not scan % self.viewWidth: scan = 0 buffer = self.displayDataQueue.get(timeout=0.1) self.displayData[:, scan, 0] = numpy.fromstring(buffer, dtype=numpy.uint16) self.displayData[:, scan, 1] = numpy.fromstring(buffer, dtype=numpy.uint16) self.displayData[:, scan, 2] = numpy.fromstring(buffer, dtype=numpy.uint16) gtk.gdk.threads_enter() self.myPixbuf = gtk.gdk.pixbuf_new_from_data(self.displayData.tostring(), gtk.gdk.COLORSPACE_RGB, False, 8, self.viewWidth, self.viewHeight, self.viewWidth * 3) self.image.set_from_pixbuf(self.myPixbuf) self.image.show() gtk.gdk.threads_leave() scan += 1 except Queue.Empty: print 'myDisplay finished!' 
pass def quitGUI(obj): print 'Currently active threads: %s' % threading.enumerate() gtk.main_quit() if __name__ == '__main__': # Create socket (IPv4 protocol, datagram (UDP)) and bind to address socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) host = '192.168.1.5' port = 1024 socketUDP.bind((host, port)) # Data parameters samplesPerScan = 256 packetsPerSecond = 1200 packetSize = 512 duration = 1 # For now, set a fixed duration to log data numScans = int(packetsPerSecond * duration) # Create array to store data data = numpy.zeros((samplesPerScan, numScans), dtype=numpy.uint16) # Create queue for displaying from readDataQueue = Queue.Queue(numScans) # Build GUI from Glade XML file builder = gtk.Builder() builder.add_from_file('GroundVue.glade') window = builder.get_object('mainwindow') window.connect('destroy', quitGUI) view = builder.get_object('viewport') image = gtk.Image() view.add(image) viewArea = (1200, samplesPerScan) # Instantiate & start threads myServer = readFromUDPSocket(socketUDP, readDataQueue, packetSize, numScans) myDisplay = displayWithGTK(readDataQueue, image, viewArea) myServer.start() myDisplay.start() gtk.gdk.threads_enter() gtk.main() gtk.gdk.threads_leave() print 'gtk.main finished!'
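
    Two changes that usually reduce this kind of early-sample UDP loss: ask the kernel for a larger socket receive buffer (SO_RCVBUF), and keep the reading thread's loop as bare as possible so it is back inside recv() before the next datagram arrives -- in CPython the display thread's per-scan pixbuf work competes with the reader for the interpreter lock, and any datagram that arrives while the kernel buffer is already full is silently dropped. A hedged sketch of the reader with those changes (host, port, packet size and scan count are the values from the question; the OS may silently cap the requested buffer size):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # Bigger kernel receive buffer so bursts survive short stalls in the reader thread.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
        print("SO_RCVBUF granted:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
        sock.bind(("192.168.1.5", 1024))

        packets = []
        for _ in range(1200):
            # Nothing between recv() calls except an append; parsing, queueing and display
            # all happen afterwards (or in another thread consuming this list/queue).
            packets.append(sock.recv(512))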

    Read the article

  • Applescript: cleaning a string

    - by Mike
    I have this string that has illegal chars that I want to remove but I don't know what kind of chars may be present. I built a list of chars that I want not to be filtered and I built this script (from another one I found on the web). on clean_string(TheString) --Store the current TIDs. To be polite to other scripts. set previousDelimiter to AppleScript's text item delimiters set potentialName to TheString set legalName to {} set legalCharacters to {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "1", "2", "3", "4", "5", "6", "7", "8", "9", "0", "?", "+", "-", "Ç", "ç", "á", "Á", "é", "É", "í", "Í", "ó", "Ó", "ú", "Ú", "â", "Â", "ã", "Ã", "ñ", "Ñ", "õ", "Õ", "à", "À", "è", "È", "ü", "Ü", "ö", "Ö", "!", "$", "%", "/", "(", ")", "&", "€", "#", "@", "=", "*", "+", "-", ",", ".", "–", "_", " ", ":", ";", ASCII character 10, ASCII character 13} --Whatever you want to eliminate. --Now iterate through the characters checking them. repeat with thisCharacter in the characters of potentialName set thisCharacter to thisCharacter as text if thisCharacter is in legalCharacters then set the end of legalName to thisCharacter log (legalName as string) end if end repeat --Make sure that you set the TIDs before making the --list of characters into a string. set AppleScript's text item delimiters to "" --Check the name's length. if length of legalName is greater than 32 then set legalName to items 1 thru 32 of legalName as text else set legalName to legalName as text end if --Restore the current TIDs. To be polite to other scripts. set AppleScript's text item delimiters to previousDelimiter return legalName end clean_string The problem is that this script is slow as hell and gives me timeout. What I am doing is checking character by character and comparing against the legalCharacters list. If the character is there, it is fine. If not, ignore. Is there a fast way to do that? something like "look at every char of TheString and remove those that are not on legalCharacters" ? thanks for any help.
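
    The slowness comes from comparing every character of the input against a 100-plus item list, one AppleScript loop iteration at a time. The usual fix is a single pass that deletes everything outside the whitelist -- in AppleScript typically by handing the text to `do shell script`, or by building the result with text item delimiters rather than per-character `is in` tests. The underlying idea, sketched in Python purely as an illustration of the one-pass whitelist (the character class mirrors the legalCharacters list from the script):

        import re

        ILLEGAL = re.compile(r"[^0-9A-Za-zÇçáÁéÉíÍóÓúÚâÂãÃñÑõÕàÀèÈüÜöÖ?+\-!$%/()&€#@=*,.–_ :;\n\r]")

        def clean_string(s, max_len=32):
            return ILLEGAL.sub("", s)[:max_len]

        print(clean_string("Olá – teste? ©®™ 100%"))   # the ©®™ run is stripped in one pass

    A `do shell script` equivalent (for example tr -cd with the same whitelist, or a perl one-liner if the accented characters cause trouble) keeps the whole job in one call instead of thousands of AppleScript comparisons.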

    Read the article

  • Run CGI in IIS 7 to work with GET without Requiring POST Request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0) where it works just fine to a new Windows 2008 server with IIS 7.0 where we're getting the following problem: After setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) file if I'm calling it via POST request (form submit from another page). If I just try to type in the URL of the file (issuing a GET request) I get the following error: HTTP Error 502.2 - Bad Gateway The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are "Exception EInOutError in module rdbweb.exe at 00039B44. I/O error 6. ". This is a very old application for one of our clients. When we tried to call the vendor they said we need to pay ~ $3000 annual support fee in order to start the talk about it. Of course I'm trying to avoid that! Note that: If we create a normal HTML form that submits to "rdbweb.exe", we get the CGI working normally. We can't use this as workaround though because some pages in the application link to "rdbweb.exe" with normal link not form submit. If we run "rdbweb.exe". from a Console (Command Prompt) Window not IIS, we get the normal HTML we'd typically expect, no problem. We have tried the following: Ensuring the CGI module mapped to "rdbweb.exe".in IIS has all permissions (read, write, execute) enabled and also all verbs are allowed not just specific ones, also tried allowing GET, POST explicitely. Ensuring the application bool has "enable 32 bit applications" set to true. Ensuring the website runs with an account that has full permissions on the "rdbweb.exe".file and whole website (although we know it "read", "execute" should be enough). Ensuring the machine wide IIS setting for "ISAPI and CGI Restrictions" has the full path to "rdbweb.exe".allowed. Making sure we have the latest Windows Updates (for IIS6 we found knowledge base articles stating bugs that require hot fixes for IIS6, but nothing similar was found for IIS7). Changing the module from CGI to Fast CGI, not working also Now the only remaining possibility we have instigated is the following Microsoft Knowledge Base article:http://support.microsoft.com/kb/145661 - Which is about: CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are: the article suggests the following solution: Modify the source code for the CGI application header output. The following is an example of a correct header: print "HTTP/1.0 200 OK\n"; print "Content-Type: text/html\n\n\n"; Unfortunately we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring POST request? Note that on the old IIS6 server the application is working just fine, and I could not find any special IIS configuration that I may want to try its equivalent on IIS7.
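
    For reference, the requirement behind that KB article is only that a CGI executable writes a complete header block -- terminated by a blank line -- before any other output, on every code path including errors. A minimal illustrative CGI that satisfies it, useful for confirming that the IIS 7 handler mapping, permissions and GET handling are fine independently of rdbweb.exe (assumes a CGI script mapping to a Python interpreter exists; the body content is arbitrary):

        #!/usr/bin/env python
        import sys

        # Headers first, then a blank line, then the body -- exactly what the 502.2
        # message says rdbweb.exe failed to produce on a plain GET.
        sys.stdout.write("Status: 200 OK\r\n")
        sys.stdout.write("Content-Type: text/html\r\n")
        sys.stdout.write("\r\n")
        sys.stdout.write("<html><body>CGI handler mapping works.</body></html>\r\n")

    The exception text in the 502 ("EInOutError ... I/O error 6") suggests rdbweb.exe itself is failing before it gets its headers out on GET requests; a known-good test CGI like this helps separate that from IIS configuration.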

    Read the article

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have a HP Server with SmartArray P400 controller (incl. 256 MB Cache/Battery Backup) with a logicaldrive with replaced failed physicaldrive that does not rebuild. This is how it looked when I detected the error: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) unassigned physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK) ~# I thought that I had drive 2I:1:8 configured as a spare for Array A and Array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even if only 1 physicaldrive of the RAID5 is failed. Does someone know why this could happen? The logicaldrive should go into "Degraded" mode but still be fully accessible from the host os!? I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible: ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8 Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# Interestingly it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "failed" state due to the missing spare and protects failed arrays from modification. So I tried was to reenable the logicaldrive (to add the spare afterwards): ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# But as you can see, re-enabling the logicaldrive this was not possible. Now I replaced the failed drive by hotswapping it with the unassigned drive. The status now looks like this: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) ~# The logical drive is still not accessible. Why is it not rebuilding? What can I do? 
FYI, this is the configuration of my controller: ~# /usr/sbin/hpacucli ctrl slot=0 show Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: XXXX Cache Serial Number: XXXX RAID 6 (ADG) Status: Enabled Controller Status: OK Chassis Slot: Hardware Revision: Rev E Firmware Version: 5.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Analysis Inconsistency Notification: Disabled Raid1 Write Buffering: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True ~# Thanks for you help in advance.

    Read the article

  • Ubuntu 10.10 Ad-Hoc Setup (from Wireless Router, to Ubuntu Server/Desktop to Wireless Router)

    - by user60375
    Okay, so I know there are different approaches for this, but I will explain my story briefly before getting to the technical stuff. My fiancée and I are going through some financial issues (as I assume a lot of us are). We ended up having to move from our house and stay with some friends/family for 6 months, just to get ourselves caught up. (Medical bills, among other issues,etc). So this is where it gets fun. At our friends house, we are staying in the loft setup which is not near the cable modem and wireless router. I have a "hand-crafted" media center running XBMC, an Ubuntu 10.10 Server/Desktop (multi-purpose, very powerful and tons drive space), two working laptops, a between the two of us we have multiple wireless devices/phones. Now our friends Wireless router doesn't have any options for assigning IP addresses, but my router does. My current setup is: Friends Cable Modem -- Friend's Wireless Router -- Ubuntu 10.10 Server -- My Wireless Router (local-link from Friend's wireless (incoming) to sharing connection on ETH0 (outgoing)) -- to all devices. (Wireless Modem, Ubuntu Server that share's it's wireless incoming connection to the ethernet port my Wireless router share's with the rest of the devices). I setup my router to use default settings from my friend's router, using Google's DNS on my router (disabled DNS setup on Ubuntu Server), everything is assigned nicely and runs smooth. My Ubuntu server was given the address 10.42.43.1 (assuming standard from Network-Manager). (On the Ubuntu machine that shares to my wireless router; I have some server apps installed, but mainly just use Samba/NFS/Tangerine action. My problem/goal is that every device has no problem of accessing the internet from my router, the media-center has an assigned ip address, all services from all devices (ZeroConf, Avanhi, Bonjour, GIT, SSH, FTP, Apache2, etc) all work correctly except from my Ubuntu Server (which serves the wireless connection to ETH0 to another Wireless Router). The Ubuntu 10.10 Server/Desktop is not broadcasting anything (the Zeroconf Service Discovery 0.4 Gnome Applet shows the services from the Ubuntu server but no other computers can see them). I can access it from my Media-Center (Running Xbuntu 10.04) if I direct it to 10.42.43.1, no problem. But I cannot access Tangerine (Daapd) and the Samba shares do not show up on any computers for 10.42.43.1 (not in the WORKGROUP which Samba is setup simple and default but I can direct computers to that address and the shares will add except on a damn Windows 7 parition). Is this an issue with how I have my router setup and possible the gateway? An issue with Network-Manager? And issue with my Ubuntu Server/Desktop? I know there is a lot to that, but it's simpler than I probably have explained? Any help would be appreciated. If you need more details, I can provide them. If there is a better way of my attempting this home-network, please let me know. Thanks in advance for the help.

    Read the article

  • Configuring Cisco 877W router from scratch for DHCP, WiFi, ADSL2+, NAT

    - by David M Williams
    Hi all, I apologise if this is a BIG question but I am quite lost with the Cisco IOS. I know what I want to achieve just not how to do it :( I have a Cisco 877W router with 4 FastEthernet interfaces, 1 ATM interface and 1 802.11 Radio. I want to set it up for a small network and am trying to construct a configuration below. I was using Google to try and flesh it out but I think I need help and guidance from actual experts! If it helps, output from show ver says Cisco IOS software, C870 software (C870-ADVSECURITYK9-M), version 12.4(4)T7, release software (fc1) ROM: System bootstrap, version 12.3(8r)YI4, release software Here's what I have so far, which hopefully outlines clearly enough what I am wanting to do. The bits in angle brackets are placeholders (eg the secret password). ! ! Set router hostname ! hostname Shazam ! ! Set usernames and passwords ! username david privilege 15 secret 0 <PASSWORD> enable secret <SECRETPASSWORD> ! ! Configure SSH and telnet access ! line vty 0 4 privilege level 15 login local transport input telnet ssh ! ! Local logging ! logging buffered 51200 warning ! ! Set date and time for NSW, Australia (GMT +10h) ! ! ! Set router IP address to 192.168.1.1 on FastEthernet0 port ! interface FastEthernet0 ip address 192.168.1.1 255.255.255.0 no shut ip nat inside ! ! Forward any unknown DNS requests to Google ! ip dns server ip name-server 8.8.8.8 ip name-server 8.8.4.4 ! ! Set up DHCP ! DHCP pool covers 192.168.1.100 - .199 ! Set gateway and DNS server to be the router, ie 192.168.1.1 ! service dhcp ip routing ip dhcp excluded-address 192.168.1.1 192.168.1.99 ip dhcp excluded-address 192.168.1.200 192.168.1.255 ip dhcp pool <DHCPPOOLNAME> network 192.168.1.0 255.255.255.0 default-router 192.168.1.1 dns-server 192.168.1.1 lease 7 ! ! DHCP reservations ! ! Assign IP address 192.168.1.105 to MAC address 00-21-5D-2F-58-04 ! ! Configure ADSL2 connection details ! interface atm dsl operating-mode adsl2+ ! ! Set up NAT rules ! ! Forward port 35394 to 192.168.1.105 ! ! Set up WiFi ! ! SSID visible, WPA2 security, Pre-shared key I'm hoping most of this is boiler-plate stuff to you guys. I'm keen to not just get a working script but to actually understand it also. Unfortunately, I'm finding the Cisco reference material online very complex. Thank you!

    Read the article

  • Windows Server: specifying the default IP address when NIC has multiple addresses

    - by Cédric Boivin
    I have a Windows Server which has ~10 IP addresses statically bound. The problem is I don't know how to specify the default IP address. Sometimes when I assign a new address to the NIC, the default IP address changes with the last IP entered in the advanced IP configuration on the NIC. This has the effect (since I use NAT) that the outgoing public IP changes too. Even though this problem is currently on Windows Server 2008, I've had the same problem with a Windows Server 2003. How can you set the default IP address on a NIC when it has multiple IP addresses bound? I remark something. When i check route print i see is it there the problem ? 0.0.0.0 0.0.0.0 192.168.x.x 192.168.99.49 261 if i want the default ip be example : 192.168.99.100 There is my route. Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.99.1 192.168.99.49 261 10.10.10.0 255.255.255.0 On-link 10.10.10.10 261 10.10.10.10 255.255.255.255 On-link 10.10.10.10 261 10.10.10.255 255.255.255.255 On-link 10.10.10.10 261 192.168.99.0 255.255.255.0 On-link 192.168.99.49 261 192.168.99.49 255.255.255.255 On-link 192.168.99.49 261 192.168.99.51 255.255.255.255 On-link 192.168.99.49 261 192.168.99.52 255.255.255.255 On-link 192.168.99.49 261 192.168.99.53 255.255.255.255 On-link 192.168.99.49 261 192.168.99.54 255.255.255.255 On-link 192.168.99.49 261 192.168.99.55 255.255.255.255 On-link 192.168.99.49 261 192.168.99.56 255.255.255.255 On-link 192.168.99.49 261 192.168.99.57 255.255.255.255 On-link 192.168.99.49 261 192.168.99.58 255.255.255.255 On-link 192.168.99.49 261 192.168.99.59 255.255.255.255 On-link 192.168.99.49 261 192.168.99.60 255.255.255.255 On-link 192.168.99.49 261 192.168.99.61 255.255.255.255 On-link 192.168.99.49 261 192.168.99.62 255.255.255.255 On-link 192.168.99.49 261 192.168.99.64 255.255.255.255 On-link 192.168.99.49 261 192.168.99.65 255.255.255.255 On-link 192.168.99.49 261 192.168.99.66 255.255.255.255 On-link 192.168.99.49 261 192.168.99.67 255.255.255.255 On-link 192.168.99.49 261 192.168.99.68 255.255.255.255 On-link 192.168.99.49 261 192.168.99.70 255.255.255.255 On-link 192.168.99.49 261 192.168.99.71 255.255.255.255 On-link 192.168.99.49 261 192.168.99.100 255.255.255.255 On-link 192.168.99.49 261 192.168.99.108 255.255.255.255 On-link 192.168.99.49 261 192.168.99.109 255.255.255.255 On-link 192.168.99.49 261 192.168.99.112 255.255.255.255 On-link 192.168.99.49 261 192.168.99.255 255.255.255.255 On-link 192.168.99.49 261 224.0.0.0 240.0.0.0 On-link 192.168.99.49 261 224.0.0.0 240.0.0.0 On-link 10.10.10.10 261 255.255.255.255 255.255.255.255 On-link 192.168.99.49 261 255.255.255.255 255.255.255.255 On-link 10.10.10.10 261 I need to be certain, when i go on internet i go with the 192.168.99.100 how i do that ?

    Read the article
