Search Results

Search found 30250 results on 1210 pages for 'good guy'.

Page 223/1210

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror.

    In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.”

    The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy, so nobody can accuse me of being reflexively anti-regulation. That said, you have only so many cycles in the day, and most of us would rather spend them slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too – and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in.

    The general trend in cybersecurity is that the powers-that-be – which encompasses groups other than just legislators – are increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs, b) liberal carry permits, c) alarm systems, d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked.

    More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something, b) feel as if they are doing something, regardless of whether they are actually doing something useful or cost effective.
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something.

    Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

    • It is cost effective
    • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem

    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give the PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of the “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

    A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

    Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)

    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t number more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
    Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround will work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of the vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you.

    Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge it. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium?

    Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row-level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others, or with vulnerability information and/or patches earlier than others. Failure to remember that “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!)

    Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus of misappropriation of the payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that the VRA’s current requirement for the widespread distribution of security vulnerability exploit details – at any time, but particularly before a vendor can issue a patch or a workaround – is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits – actually, any benefits – to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers.
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, depending on the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them: a) a vulnerability b) in a payment application c) with exploit code – the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

    Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

    Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

    So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

    I like to be charitable and say “PCI meant well,” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
    By Way of Explanation… Unrelated to PCI entirely: the explanation for why I have not been blogging much recently is that I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

    From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.

    * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • Why the CBO ignored the index: histograms and col_usage$

    - by Liu Maclean
    A question on the ITPub forum asked why the CBO refused to use an available index. The behavior can be reproduced with the following test case:

    SQL> create table maclean1 as select * from dba_objects;
    Table created.

    SQL> update maclean1 set status='INVALID' where owner='MACLEAN';
    2 rows updated.

    SQL> commit;
    Commit complete.

    SQL> create index ind_maclean1 on maclean1(status);
    Index created.

    SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true);
    PL/SQL procedure successfully completed.

    SQL> explain plan for select * from maclean1 where status='INVALID';
    Explained.

    SQL> set linesize 140 pagesize 1400
    SQL> select * from table(dbms_xplan.display());

    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------
    Plan hash value: 987568083
    ------------------------------------------------------------------------------
    | Id | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |          | 11320 | 1028K |    85   (0)| 00:00:02 |
    |* 1 |  TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K |    85   (0)| 00:00:02 |
    ------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter("STATUS"='INVALID')

    13 rows selected.

    The corresponding 10053 trace shows the access path analysis:

    Access path analysis for MACLEAN1
    ***************************************
    SINGLE TABLE ACCESS PATH
      Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]
      Column (#10): STATUS(
        AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000
      Table: MACLEAN1  Alias: MACLEAN1
        Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50
      Access Path: TableScan
        Cost:  85.33  Resp: 85.33  Degree: 0
          Cost_io: 85.00  Cost_cpu: 11935345
          Resp_io: 85.00  Resp_cpu: 11935345
      Access Path: index (AllEqRange)
        Index: IND_MACLEAN1
        resc_io: 185.00  resc_cpu: 8449916
        ix_sel: 0.500000  ix_sel_with_filters: 0.500000
        Cost: 185.24  Resp: 185.24  Degree: 1
      Best:: AccessPath: TableScan
             Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0

    In the 10053 trace we can see that, with no histogram on the column, Density = 0.5, which is simply 1/NDV. Only 2 rows actually satisfy "STATUS"='INVALID', but because dbms_stats collected no histogram on the skewed status column, the CBO estimated roughly half the table (Card: 11320) and priced the index range scan on IND_MACLEAN1 out of contention, choosing a full table scan instead. That is clearly not the plan we want from the optimizer.

    Now let's gather a histogram on the column so the optimizer can estimate an accurate cardinality for status='INVALID'. The test environment:

    [oracle@vrh4 ~]$ sqlplus / as sysdba

    SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.

    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> select * from v$version;

    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE 11.2.0.2.0 Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    SQL> show parameter optimizer_fea

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    optimizer_features_enable            string      11.2.0.2

    SQL> select * from global_name;

    GLOBAL_NAME
    --------------------------------------------------------------------------------
    www.oracledatabase12g.com & www.askmaclean.com

    SQL> drop table maclean;
    Table dropped.
    SQL> create table maclean as select * from dba_objects;
    Table created.

    SQL> update maclean set status='INVALID' where owner='MACLEAN';
    2 rows updated.

    SQL> commit;
    Commit complete.

    SQL> create index ind_maclean on maclean(status);
    Index created.

    SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2');
    PL/SQL procedure successfully completed.

    This time we asked for a 2-bucket histogram. To examine what was gathered, we can use the script Guy Harrison of Quest published for displaying FREQUENCY histograms:

    rem
    rem Generate a histogram of data distribution in a column as recorded
    rem in dba_tab_histograms
    rem
    rem Guy Harrison Jan 2010 : www.guyharrison.net
    rem
    rem hexstr function is from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563
    rem
    set pagesize 10000
    set lines 120
    set verify off
    col char_value format a10 heading "Endpoint|value"
    col bucket_count format 99,999,999 heading "bucket|count"
    col pct format 999.99 heading "Pct"
    col pct_of_max format a62 heading "Pct of|Max value"
    rem col endpoint_value format 9999999999999 heading "endpoint|value"

    CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER)
       RETURN VARCHAR2
    AS
       l_str    LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x'));
       l_return VARCHAR2 (4000);
    BEGIN
       WHILE (l_str IS NOT NULL)
       LOOP
          l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx'));
          l_str := SUBSTR (l_str, 3);
       END LOOP;
       RETURN (SUBSTR (l_return, 1, 6));
    END;
    /

    WITH hist_data AS (
      SELECT endpoint_value, endpoint_actual_value,
             NVL (LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
             endpoint_number,
             endpoint_number
               - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
        FROM dba_tab_histograms
        JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
       WHERE owner = '&owner'
         AND table_name = '&table'
         AND column_name = '&column'
         AND histogram = 'FREQUENCY')
    SELECT NVL (endpoint_actual_value, endpoint_value) endpoint_value,
           bucket_count,
           ROUND (bucket_count * 100 / SUM (bucket_count) OVER (), 2) pct,
           RPAD (' ', ROUND (bucket_count * 50 / MAX (bucket_count) OVER ()), '*') pct_of_max
      FROM hist_data;

    WITH hist_data AS (
      SELECT endpoint_value, endpoint_actual_value,
             NVL (LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
             endpoint_number,
             endpoint_number
               - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
        FROM dba_tab_histograms
        JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
       WHERE owner = '&owner'
         AND table_name = '&table'
         AND column_name = '&column'
         AND histogram = 'FREQUENCY')
    SELECT hexstr (endpoint_value) char_value,
           bucket_count,
           ROUND (bucket_count * 100 / SUM (bucket_count) OVER (), 2) pct,
           RPAD (' ', ROUND (bucket_count * 50 / MAX (bucket_count) OVER ()), '*') pct_of_max
      FROM hist_data
     ORDER BY endpoint_value;

    Running the script shows the FREQUENCY histogram we just gathered: dbms_stats recorded bucket count = 9 (percent = 0.04) for STATUS='INVALID'. Now look at the plan and the 10053 trace again:

    SQL> explain plan for select * from maclean where status='INVALID';
    Explained.
    SQL> select * from table(dbms_xplan.display());

    PLAN_TABLE_OUTPUT
    -------------------------------------
    Plan hash value: 3087014066
    -------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2 - access("STATUS"='INVALID')

    With the histogram in place, the CBO computes an accurate cardinality (9) for STATUS='INVALID', so the cost comparison now correctly favors an index range scan over a full table scan. Re-running the 10053 trace confirms it:

    SQL> alter system flush shared_pool;
    System altered.

    SQL> oradebug setmypid;
    Statement processed.

    SQL> oradebug event 10053 trace name context forever ,level 1;
    Statement processed.

    SQL> explain plan for select * from maclean where status='INVALID';
    Explained.

    SINGLE TABLE ACCESS PATH
      Single Table Cardinality Estimation for MACLEAN[MACLEAN]
      Column (#10): NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2

    (Here NewDensity = bucket_count / SUM(bucket_count) / 2.)

      Column (#10): STATUS(
        AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199
        Histogram: Freq  #Bkts: 2  UncompBkts: 22640  EndPtVals: 2
      Table: MACLEAN  Alias: MACLEAN
        Card: Original: 22640.000000  Rounded: 9  Computed: 9.00  Non Adjusted: 9.00
      Access Path: TableScan
        Cost: 85.30  Resp: 85.30  Degree: 0
          Cost_io: 85.00  Cost_cpu: 10804625
          Resp_io: 85.00  Resp_cpu: 10804625
      Access Path: index (AllEqRange)
        Index: IND_MACLEAN
        resc_io: 2.00  resc_cpu: 20763
        ix_sel: 0.000398  ix_sel_with_filters: 0.000398
        Cost: 2.00  Resp: 2.00  Degree: 1
      Best:: AccessPath: IndexRange
             Index: IND_MACLEAN
             Cost: 2.00  Degree: 1  Resp: 2.00  Card: 9.00  Bytes: 0

    So the 2-bucket FREQUENCY histogram is what allows the CBO to make the right choice here. That raises the question of why the first gather, which used the default method_opt (dbms_stats.DEFAULT_METHOD_OPT, i.e. FOR ALL COLUMNS SIZE AUTO), collected no histogram on the column. With SIZE AUTO, dbms_stats decides whether a column deserves a histogram partly from col_usage$, the dictionary table in which column usage in predicates is recorded (see my earlier note on SMON and col_usage$). Before any query had referenced the column in a predicate, col_usage$ contained no record for it, so dbms_stats skipped the histogram. For example:

    SQL> drop table maclean;
    Table dropped.

    SQL> create table maclean as select * from dba_objects;
    Table created.

    SQL> update maclean set status='INVALID' where owner='MACLEAN';
    2 rows updated.

    SQL> commit;
    Commit complete.

    SQL> create index ind_maclean on maclean(status);
    Index created.

    Gather stats on maclean with the default method_opt:

    SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
    PL/SQL procedure successfully completed.

    @histogram.sql
    Enter value for owner: SYS
    old  12:    WHERE owner = '&owner'
    new  12:    WHERE owner = 'SYS'
    Enter value for table: MACLEAN
    old  13:      AND table_name = '&table'
    new  13:      AND table_name = 'MACLEAN'
    Enter value for column: STATUS
    old  14:      AND column_name = '&column'
    new  14:      AND column_name = 'STATUS'

    no rows selected

    Because col_usage$ has no usage record for the status column yet, no histogram was gathered on it. Now populate col_usage$ by actually querying the column:
    declare
    begin
      for i in 1..500 loop
        execute immediate 'alter system flush shared_pool';
        DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
        execute immediate 'select count(*) from maclean where status=''INVALID'' ';
      end loop;
    end;
    /
    PL/SQL procedure successfully completed.

    SQL> select obj# from obj$ where name='MACLEAN';

          OBJ#
    ----------
         97215

    SQL> select * from col_usage$ where OBJ#=97215;

          OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP
    ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------
         97215          1              1              0                 0           0          0          0 17-OCT-11
         97215         10            499              0                 0           0          0          0 17-OCT-11

    SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
    PL/SQL procedure successfully completed.

    @histogram.sql
    Enter value for owner: SYS
    Enter value for table: MACLEAN
    Enter value for column: STATUS

    Endpoint        bucket            Pct of
    value            count     Pct   Max value
    ---------- ----------- ------- --------------------------------------------------------------
    INVALI               2     .04
    VALIC3           5,453   99.96  *************************************************

    Now that col_usage$ records 499 equality predicates against the status column, the default gather does collect a FREQUENCY histogram on it.
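
    As a quick sanity check on the trace arithmetic, every figure the optimizer printed after the SIZE 2 gather can be reproduced from the histogram counts. A minimal Python sketch (the numbers come from the 10053 trace above; the formulas are inferred from the trace output, not taken from Oracle source):

      # Figures from the 10053 trace after the FOR ALL COLUMNS SIZE 2 gather
      bucket_count = 9          # endpoint rows recorded for STATUS='INVALID'
      total_rows   = 22640      # UncompBkts / table cardinality

      ix_sel      = bucket_count / total_rows   # 0.000398 (AllEqRange ix_sel)
      new_density = ix_sel / 2                  # 0.000199 (NewDensity)
      card        = total_rows * ix_sel         # 9.0 (Card: Rounded: 9)

      print(f"ix_sel={ix_sel:.6f}  NewDensity={new_density:.6f}  Card={card:.1f}")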

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:

    > /c2/u1 show

    Unit   UnitType  Status       %RCmpl  %V/I/M  Port  Stripe  Size(GB)
    ------------------------------------------------------------------------
    u1     RAID-6    REBUILDING   4%(A)   -       -     64K     7450.5
    u1-0   DISK      OK           -       -       p5    -       931.312
    u1-1   DISK      OK           -       -       p2    -       931.312
    u1-2   DISK      OK           -       -       p1    -       931.312
    u1-3   DISK      OK           -       -       p4    -       931.312
    u1-4   DISK      OK           -       -       p11   -       931.312
    u1-5   DISK      DEGRADED     -       -       p6    -       931.312
    u1-6   DISK      OK           -       -       p7    -       931.312
    u1-7   DISK      DEGRADED     -       -       p3    -       931.312
    u1-8   DISK      WARNING      -       -       p9    -       931.312
    u1-9   DISK      OK           -       -       p10   -       931.312
    u1/v0  Volume    -            -       -       -     -       7450.5

    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And the "rebuild" has been stuck at 4% for ten hours now. So:

    How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know which disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC], according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in its current state.

    Can I pull out the drive that is demonstrably failing (the WARNING drive) when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be to pull the WARNING drive and tell the controller to use one of my hot spares in the rebuild. But won't I kill the array by pulling a "good" drive from a RAID-6 with two DEGRADED drives?

    Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad, and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt with respect to the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action?

    Advice outside the spiritual would be much appreciated. Thanks.

    Read the article

  • Microsoft Office documents collaboration - Open Source alternative

    - by Saggi Malachi
    I am looking for a good solution for collaborating on Microsoft Office documents. We currently just edit directly on a Samba share, but it's one big mess: sometimes people leave the office with their laptops while docs are open, so swap files remain behind and then nobody is sure what's going on. Is there any good and simple open-source solution based on Linux? I've tried Alfresco, but it is much more than what I need; we have an internal wiki for most collaboration, and I just need some solution for the stuff we need to do in Microsoft Office (mostly Excel files, the rest is in the wiki). EDIT: Some more info as requested - we are a very small group, 4 full-time employees and a few freelancers. The best idea I've got so far is just managing it in a Subversion repository with a lock-modify-unlock policy, but I'd love to hear about better solutions. Thanks!

    Read the article

  • Intel® Core™2 Duo Desktop Processor vs Intel® Core™ i3 Desktop Processor?

    - by metal gear solid
    Intel® Core™2 Duo Desktop Processor vs. Intel® Core™ i3 Desktop Processor: which CPU is better to buy? The Intel® Core™ i3-530 Processor (4M Cache, 2.93 GHz; it also supports DDR3) or the Intel® Core™2 Duo Processor E7500 (3M Cache, 2.93 GHz, 1066 MHz FSB; it supports DDR2 only)? I do not play games on my PC, but I need good performance in Adobe Photoshop and when watching full-HD movies, and I need good performance in multitasking. Along with either of these CPUs I would purchase 2 x 2 GB sticks of RAM, and I will use Windows 7. I will also use Microsoft VPC images with MS Virtual PC.

    Read the article

  • Does this laptop have high enough specifications for gaming? [closed]

    - by Grant
    Here's the laptop. It wouldn't be hardcore gaming, mostly things like the new Deus Ex game, Mirror's Edge, Portal 2, etc. I need to replace my current, broken laptop, and I thought this would be a good opportunity to get to play some of these games. My current laptop is really only lacking in its graphics card (Intel Series 4 chipset). If this laptop isn't good enough, I would really appreciate suggestions. I won't be able to get a desktop, otherwise I would, and I can't spend more than $1000 on my new laptop.

    Read the article

  • Should I be afraid of Linux server administration?

    - by markle976
    I've been trying to figure out what to focus on, and I finally realized that the root of my quandary is that I am unsure about learning Linux server administration. I have been getting pretty good with PHP/MySQL and web development, but I am not very familiar with Linux. Is it hard to learn? What would I need to know in order to manage a LAMP stack? Also, which distribution is most used in enterprises? I think I have also hesitated to dive in because it seems like it is mostly used in small companies, but I guess that could be a good thing.

    Read the article

  • JMeter Stress testing

    - by mcondiff
    A MAMP server hosting a Joomla instance. I'd like to hear the community's thoughts on the best way to stress test the server and find its breaking point in concurrent users. Currently I have set up a test plan which goes to the home page, grabbing index.php, the CSS, JS and all images, and I have run tests on 1 to 100 users with a varying number of loops. What I'd like to know is: how do I determine at what number of concurrent or looping requests my server can no longer handle the proposed increase in traffic? What is a good KB/sec, throughput, average, max and min in the Aggregate Report, and at what number of threads/loops? I have googled and have not found immediate answers to these questions, so I thought to come here. More or less I have just used http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf to guide me, and then I have been winging it in terms of thread and loop numbers. Any light shed on this subject would be much appreciated.
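
    One way to turn Aggregate Report numbers into a capacity estimate is Little's Law: the number of users a server is actually serving concurrently equals throughput times average response time. A rough Python sketch (the sample figures below are invented for illustration, not from any real run):

      # Little's Law: N = X * R
      #   X = throughput in requests/second (JMeter Aggregate Report "Throughput")
      #   R = average response time in seconds (Aggregate Report "Average" / 1000)
      def concurrent_users(throughput_rps: float, avg_response_ms: float) -> float:
          return throughput_rps * (avg_response_ms / 1000.0)

      print(concurrent_users(120, 350))   # 120 req/s at 350 ms -> ~42 concurrent users

    In practice the breaking point shows up when adding threads no longer raises throughput while the Average (and the error count) keeps climbing.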

    Read the article

  • Feedback request: Linode for Magento hosting?

    - by Ernest
    Hello, I install and customize a self-made application for web pages which uses Spring + JSTL, and the admin uses Flex. I also want to add an ecommerce CMS to my offering, so I have decided to start with Magento. I'm currently using a DailyRazor VPS, but I was thinking of switching to Linode because of the price. My questions are: 1. Is Linode good for Magento? 2. Is Linode good for Java + Tomcat + JSTL + MySQL apps? 3. What are the most demanding resources for Magento: CPU? RAM? Does it consume more if I have more products? Well, I hope to learn from you; this page is always so helpful. UPDATE: A final question: 4. Do you recommend another VPS service for my requirements?

    Read the article

  • Exchange Transport Service Started but not working

    - by Philippe
    Good day, here is the problem: we are hosting a Microsoft Exchange server. Everything was working fine until recently, when the mail transport started going wrong. We almost have to restart the service every morning: the transport service is started, but mail is not delivered to the users, and senders to our server get a delayed-delivery notification. When we restart the service, all the mail is delivered and we're good to go for a day or two. Things I've noticed: the store service grows to around 6 GB of used RAM, and the w3wp.exe process hangs around 700 MB of RAM. Is there a way to schedule a restart of the transport role every 4 hours or so while I'm solving the issue, so I don't have to worry when I leave for the week-end? And most of all, does anyone have any idea how to solve this issue? Thanks, Philippe

    Read the article

  • SSD for Visual studio : Intel X25-m G2 or OCZ Vertex

    - by meska
    Hi, I'm looking for a laptop upgrade and thinking about an SSD. The laptop would be a Dell Studio XPS with a T9400, as I can get one for a very good price. So I was thinking about replacing the built-in 500 GB HDD with some kind of SSD. Currently two choices: OCZ Vertex Series 60 GB vs. Intel X25-M G2 MLC 80 GB. The price is almost the same. From AnandTech, the Intel performs badly in sequential write but scores very high numbers in random write and random read (good for Visual Studio?), and I get more space. What do you think, OCZ Vertex or Intel X25-M G2, purely from a Visual Studio 2008 and upcoming 2010 perspective?

    Read the article

  • NetApp and SQL Server?

    - by Edinor
    Do you have any good or bad experiences to share running SQL Server OLTP Systems on NetApp appliances? I have been working with a small, relatively low-volume cluster with a lower-end NetApp device, and I have found the environment to be generally unstable, at least compared to my experiences with other SANs, iSCSI arrays, and DAS setups. I struggle to believe that RAID DP and WAFL are more than fairy-dust technologies. A solution has been proposed to me that I just need a bigger, better NetApp, with PAM cards and other cool technology I've not heard of, and I feel like I would be better off spending a quarter of that on good direct-attached drives and a beefy server. At the same time, I feel that an Enterprise-class SAN should be something I can count on to be consistently a more stable, better performer than the less expensive solution I might propose. Are you a SQL Server DBA in an OLTP environment and love your NetApp? If you don't like them, why not?

    Read the article

  • How to securely generate memorable passwords?

    - by Tim
    Whenever I need new passwords I use some tools to generate them, preferably memorable passwords, but I've been wondering how secure this actually is. Using the xkcd random number generator is probably pretty bad, and cat /dev/random is probably pretty good, but generating memorable passwords seems a bit more tricky. Whenever a program generates a memorable password, it only uses a subset of the total password space available, and it is not clear to me how big this subset is. Of course a long password should help in this case, but if the "memorable" part of the program is too predictable, your passwords are not very good in the end. TL;DR: how secure are memorable password generators, given the fact that memorable passwords are a subset of the total password space? Some tools I know of: pwgen - seems OK, but the passwords are not too memorable; Mac Password Assistant - generates memorable passwords, but it is unclear to me how it works.
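
    The size of the "memorable" subset can be estimated directly: a generator that makes k independent, uniformly random picks from a pool of N candidates yields k * log2(N) bits of entropy. A small Python sketch (the pool sizes are illustrative assumptions, not measurements of any particular tool):

      import math

      def entropy_bits(pool_size: int, picks: int) -> float:
          # picks independent choices from a pool => picks * log2(pool_size) bits
          return picks * math.log2(pool_size)

      print(entropy_bits(26, 12))    # 12 random lowercase letters: ~56.4 bits
      print(entropy_bits(2048, 4))   # 4 words from a 2048-word list: 44.0 bits
      print(entropy_bits(7776, 5))   # 5 Diceware words: ~64.6 bits

    By this measure a handful of randomly chosen words compares well with a shorter random-looking password, provided the words really are chosen uniformly at random.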

    Read the article

  • CORAID using only 1 of the 2 available NICs for AoE traffic

    - by Peter Carrero
    We have six CORAID shelves in my workplace. On two of them I see AoE traffic on only one of the two NICs that are attached to the SAN switch. We have jumbo frames enabled on all devices, and both NICs show up when I issue the aoe-interfaces command. This wouldn't bother me too much if the throughput observed on the "bad" shelves using bonnie++ weren't half that of the "good" shelves. The "good" shelves are the older SR1521 model and have ReiserFS on their LUNs - not that I think it makes a difference - and the "bad" shelves are the newer SR2421 model and have JFS. Any help as to what is going on and how to rectify this would be greatly appreciated. BTW: even the lower-performing shelves outperform another iSCSI device we have, but that is another story... Thanks.

    Read the article

  • Dedicated NIC or dedicated port for iSCSI?

    - by Newt
    When spec'ing and configuring a machine that will utilise shared iSCSI storage, I've read a lot of documentation which suggests a dedicated network adapter should be used for iSCSI communication. That makes a lot of sense and I have no problem with it. The question I do have, is this - should that suggestion be taken to mean that a separate physical NIC should be used, or will a dedicated port/ports on a dual/quad port NIC be just as good? My suspicion is that simply using dedicated port(s) on a shared NIC would be just as good. Any input greatly appreciated.

    Read the article

  • Which Linux distro for a laptop in a Windows environment?

    - by Dev er dev
    I just got a new laptop at work. What would you recommend as a Linux distribution for it? All the other developers are working under Windows and use Windows tools. I'm currently using Arch Linux, but I want to change it. I don't want to waste time configuring wireless, Windows network shares, network printers, the projector, etc. - I want this stuff to just work, while still having a sane and stable development environment and tools. Is Ubuntu a good choice for this? I use Gentoo at home, but I don't think it is a good match for the work environment. EDIT: Note that we are working on cross-platform apps, and the deployment platform is almost always Linux. There are very few Windows apps that I have to use (like MS Project). It is just that everything else is Windows-centric. I use Linux because I feel more productive with it, even if I have to dual boot to edit MS Project files.

    Read the article

  • Wireless Bridge with NetGear and TP-Link

    - by Tiago Cruz
    I have a wireless NetGear WGR614 v7 router (a little old) connected to the internet, but I can't get a good signal at the other end of my house. I have another, newer TP-Link TL-WR941ND wireless router. I was able to make things work using a wired cable between them, but now I would like to do the same over a wireless connection (bridge mode, something like WDS?). Currently the computer connected to the TP-Link can ping my computer connected to the NetGear, but we cannot reach IP addresses outside my network, only internal ones. What can I do to configure this? Do BOTH wireless routers need to support bridge mode, or is one good enough? Thanks a lot!!

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. The one in my office has an MD1220 attached; the one in my hosting vendor's datacenter has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

    Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3 GB/s - dd
    Seq. writes on MD3200 at my vendor's: 240 MB/s - bonnie++, 310 MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the badly-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at this and tell me if I'm missing something here and my expectations are too high.

    To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput per couple of drives in RAID 10 on the MD3200? Is there any trick to enable such performance on an MD3200 with dual controllers, as opposed to a simple MD1220 with a single H800 adapter?

    More details about the configurations:

    The good one in my office: Dell R710, 2 CPUs X5650 @ 2.67GHz (12 cores), 96 GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64; 20 x 300 GB 2.5" SAS 10K in a single RAID 10, 1 MB chunk size, on an MD1220 plus a Dell H800 I/O controller with 1 GB cache in the host.

    The not-so-good one at my vendor's: Dell R710, 2 CPUs L5520 @ 2.27GHz (8 cores), 144 GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64; 20 x 146 GB 2.5" SAS 15K in a single RAID 10, 512 KB chunk size, on an MD3200 with two I/O controllers in the array, 1 GB cache each.

    Additional information: I've also run the same tests on the same vendor's host, but with the storage being two RAIDs of 14 x 146 GB 15K RPM drives in RAID 10, striped together at the OS level, on an MD3000+MD1000. Performance was about 25% worse than on the MD3200, despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2 x 146 GB 15K RPM drives in RAID 1, PERC 6i) I got about 128 MB/s sequential writes; just two internal drives gave me about half the throughput of 20 drives on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput.

    Thank you for looking into it. Regards, Igor

    Read the article

  • Dvorak hotkey remapping in vim, worth it?

    - by Bryan Ward
    I've been trying to learn the Dvorak keyboard layout of late, and I have been making good progress this time around. The trouble I'm finding now is that all of my hotkeys are in the wrong places. As a vim user this is particularly troubling. I have found good resources for switching the bindings back so that they are in the familiar places in vim, but I wonder if this is worth it. I also use set -o vi in my ~/.zshrc file so that I can use the familiar bindings in the terminal as well, and hjkl navigation features in a number of other applications, such as less. For those of you out there who have successfully made the switch: is it worth remapping things to be familiar again, or is it better in the long run to just deal with weirdly placed hotkeys?

    Read the article

  • My website loses packets in 70% of countries - how can I determine why?

    - by user2511667
    I checked my website with Google's page speed tester, and it shows a result of 90/100. I checked my website on Pingdom, and it shows good results there too. But when I check my website on cloudmonitor.ca.com, it shows good results in only 30% of countries; in all other countries it shows packet loss (100%). How can we determine why my website has packet loss, and what is the solution? Is this problem from my server or from my website? I created a blank HTML page and set it as my index page; after testing, it still shows packet loss, so I guess this means the problem is not in my website. Here is the live result. When I visit my website in a browser, the website works fine, but when I test my domain or IP 198.178.123.219 from the command prompt, it shows "Request timed out". Why the timeout at the command prompt?
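
    To separate a server problem from a website problem, it helps to measure ICMP loss over a longer window yourself. Here is a rough Python sketch that shells out to the system ping (it assumes a Linux host; the target IP is the one from the question):

      import re
      import subprocess

      def packet_loss(host: str, count: int = 100) -> float:
          # Runs the Linux ping and parses the "X% packet loss" summary line.
          out = subprocess.run(["ping", "-c", str(count), host],
                               capture_output=True, text=True).stdout
          m = re.search(r"([\d.]+)% packet loss", out)
          return float(m.group(1)) if m else float("nan")

      print(packet_loss("198.178.123.219"))

    Note that many servers simply firewall ICMP: in that case ping and monitoring probes will report 100% loss and "Request timed out" even though the site works fine in a browser.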

    Read the article

  • Tools to test multicast routing

    - by Zoredache
    I am looking for a good, simple tool that runs on a standard OS (Windows or Linux) that I can use to test that multicast is being passed properly by a router. I have been asked by a client to enable multicast routing on a Linux box acting as their router, since their phone system requires multicast for a few features. Since I am not physically near the client, I don't really have the ability to experiment with the various methods for setting up multicast routing on Linux. I can set up a router at my desk that is identical to what is deployed on their network, but I don't know of any good, simple tools that I can use to generate or listen for multicast traffic. The one multicast tool I have found is mcast.exe, which is part of the Windows 2000/2003 resource kit. From what I have read online, mcast.exe does not work across a router, only on the local network, so it doesn't seem useful for testing multicast routing. So, what tool(s) do you use to test that multicast routing is properly set up?
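
    Lacking a ready-made tool, a few lines of standard-library Python on either side of the router will generate and listen for multicast traffic. A minimal sketch (the group and port are arbitrary test values; any unused group in 239.0.0.0/8 works):

      import socket
      import struct
      import sys

      GROUP, PORT = "239.1.2.3", 5007   # arbitrary test group/port

      def send():
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          # Multicast TTL defaults to 1; raise it or the router will drop the packet
          s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
          s.sendto(b"multicast test", (GROUP, PORT))

      def listen():
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          s.bind(("", PORT))
          mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
          s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
          print(s.recvfrom(1500))   # blocks until a datagram arrives

      send() if sys.argv[1:] == ["send"] else listen()

    Run the listener on a host on one side of the router and the sender on the other; if the datagram arrives, multicast is being forwarded. The TTL detail matters: multicast packets default to TTL 1, so a sender that doesn't raise it will never cross a router even when routing is configured correctly.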

    Read the article

  • What is your best travel (portable) WiFi router?

    - by Sorin Sbarnea
    I'm looking for a small WiFi router that I can use to create my own wireless network when I travel. I'm also interested in a router that has a very good signal/antenna and that is easy to take with you. Please do not recommend products just because you can find them on the net; I'm interested in personal experience. It's also good to know if you can install open-source firmware on it. Selection criteria: size, signal quality, price. Required: support for both 110 V and 220 V power, ideally with an embedded or retractable cord.

    Read the article

  • Black Screen when Computer Boots

    - by BlueRaja
    I'm having a serious problem with my computer; I think I've narrowed it down to the motherboard, but I'd like a second opinion before I spend the money. Before I moved into my new apartment, my desktop was working fine; now, it just won't work. It will turn on, the fans spin up, lights come on... but nothing appears on the screen. No POST, nothing. I've tried:

    A different monitor (both are VGA)
    A different video card (both are DVI, PCIe)
    Three different, known-good VGA-DVI adapters
    The onboard video port (VGA)
    Reseating the memory, and trying only one stick
    Different, known-good wall outlets
    Unplugging the HDD and CD drive from both the motherboard and PSU
    Replacing the PSU

    Has anyone had this happen before? Perhaps it's a known problem with this motherboard? Any advice!? Here are my specs:

    A13G+ V3.0 motherboard
    2 x 2 GB 800 MHz DDR2
    600-watt PSU
    Two older GeForce video cards

    Read the article

  • Replicate / SYNC / Copy thousands of files

    - by rihatum
    A Windows 32-bit box (server OS) with 4 GB RAM; an 800 GB LUN mapped to the box as a local drive; around 700 GB of text files (yes, text files, plus a few thousand Word documents) nested in thousands of directories. I need to move this to new storage and map another server to it. What would be the best way to go about it?

    What I did was map the existing LUN to our new box, map a LUN from the new storage to the new box too, and try copying (Windows copy), but that wasn't good/fast enough considering the downtime. I am now looking for either a script which will do this or a utility (preferably open source/free) to move this amount of data at a good speed. The link is 2 x 1 Gb NICs teamed (EtherChannel, 2 Gbps). Any suggestions or pointers would be of great help. Thanks!
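
    For the scripted route, most of the win with hundreds of thousands of small files comes from copying in parallel rather than one at a time. A rough Python sketch (the drive letters are placeholders for your source and destination LUNs):

      import shutil
      from concurrent.futures import ThreadPoolExecutor
      from pathlib import Path

      SRC = Path(r"E:\data")   # placeholder: the existing LUN
      DST = Path(r"F:\data")   # placeholder: the new storage LUN

      def copy_one(src_file: Path) -> None:
          dest = DST / src_file.relative_to(SRC)
          dest.parent.mkdir(parents=True, exist_ok=True)
          shutil.copy2(src_file, dest)   # copy2 preserves timestamps

      files = (p for p in SRC.rglob("*") if p.is_file())
      with ThreadPoolExecutor(max_workers=16) as pool:
          list(pool.map(copy_one, files))

    If the box has robocopy available, its restartable mode (and multithreaded copies on newer versions) accomplishes the same thing without custom code, and rsync fills the same role if a Linux host can see both LUNs.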

    Read the article
