Search Results

Search found 147 results on 6 pages for 'paranoid'.


  • OS X - Update "Java for OS X 2012-002" is not mentioned on support.apple.com, is this OK?

    - by snies
    Straight after installing "Java for OS X 2012-001", Software Update asks me to install "Java for OS X 2012-002", which has the exact same size (66.6 MB) and description (including the same two links: HT5055 and HT1222) as the former, which strikes me as odd. "Java for OS X 2012-001" is described on the Apple support pages, but "Java for OS X 2012-002" is not mentioned anywhere. Searching on Google does not yield any usable results either. What is your opinion? Am I paranoid? Did you also see this update?

    Read the article

  • script to test mail server

    - by WebDude
    Ever since a Windows update took down my IIS 6 mail server a few weeks back, I've been really paranoid about my mail server working. So every time I run a Windows update I fire up a command prompt and send myself a quick test mail, like so:

        > telnet localhost 25
        > helo domain.com
        > mail from: [email protected]
        > rcpt to: [email protected]
        > data
        some random body to mail myself
        .

    This is a really great way to test my mail server, but it's a pain in the neck to do quickly. Is there any way I can run this in a batch script or something as a quick test? I've tried a .bat file, but it just waits after I call telnet. I've also explored whether telnet accepts an input file, and it does not seem to. What's the best way to do this?
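
    A scriptable version of this smoke test is straightforward with Python's standard smtplib, assuming Python is available on the server; a minimal sketch (the addresses and subject are placeholders):

        import smtplib
        from email.message import EmailMessage

        # Placeholder addresses; substitute your own domain.
        msg = EmailMessage()
        msg["From"] = "test@example.com"
        msg["To"] = "me@example.com"
        msg["Subject"] = "mail server smoke test"
        msg.set_content("some random body to mail myself")

        # Talks to the local SMTP service on port 25, just like the telnet session.
        with smtplib.SMTP("localhost", 25, timeout=10) as server:
            server.send_message(msg)  # raises smtplib.SMTPException on refusal
        print("OK: message accepted for delivery")

    Saved as, say, mailtest.py, it can be run from a batch file or a scheduled task after each Windows update.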

    Read the article

  • overload environment

    - by Richo
    I've recently switched to nesting my home directory, across all my machines, in an svn repo, meaning that my utility scripts, configuration (irssi, vim, zsh, screen, etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile, and then, if needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared with the default environment of an X session before any of my personal configuration loads, I have not even increased it by 50%, but some of the machines I work on are low on resources. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure; alternatively, I could write a single module for storing config that all of my utilities can inherit.
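
    For what it's worth, the cost of the variable stack is easy to measure; a quick sketch in Python that totals what the current environment occupies (every child process inherits a copy of roughly this many bytes):

        import os

        # Each entry is passed to children as a "KEY=value" string; +2 covers '=' and the trailing NUL.
        total = sum(len(k) + len(v) + 2 for k, v in os.environ.items())
        print(f"{len(os.environ)} variables, about {total} bytes")

    A typical interactive environment comes to a few kilobytes, so even a 50% increase is unlikely to matter, even on low-resource machines.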

    Read the article

  • Security considerations in providing VPN access to non-company issued computers [migrated]

    - by DKNUCKLES
    There have been a few people at my office who have requested the installation of Dropbox on their computers to synchronize files so they can work on them at home. I have always been wary of cloud computing, mainly because we are a Canadian company and enjoy the privacy of being outside the reach of the Patriot Act. The policy before I started was that employees with company-issued notebooks could be issued a VPN account, and everyone else had to use a remote desktop connection. The theory behind this logic (as I understand it) was that we had the ability to lock down the notebooks, whereas the employees' home computers were outside of our grasp. We had no way to ensure they weren't running as administrator all the time or running AV, so they were at higher risk of being infected with malware and could compromise network security. With the increase in people wanting Dropbox, I'm curious as to whether or not this policy is too restrictive and overly paranoid. Is it generally safe to provide VPN access to an employee without knowing what their computing environment looks like?

    Read the article

  • How can I restrict the backuppc client user as much as possible? (rsync)

    - by jxn
    I have BackupPC making full backups of servers, but I'd like to be sure that my setup is as paranoid as possible. BackupPC is set up to back up via rsync, using a specific user on each client to be backed up. Because the BackupPC client user has to have access to every file on the client machine and the ability to ssh into the machine without an interactive password, I'm a little nervous about securing the clients, and I'd like to know I haven't overlooked any options. Here's what I have in place: in the client user's authorized_keys file, I've included from="IPTOSERVER",command="/usr/bin/rsync" before the user's public key, so that the user can only log in coming from the BackupPC server. Next, in the sudoers file, I've added this line: backuppc ALL=NOPASSWD: /usr/bin/rsync, to allow root-level permissions only for the rsync command for that user. Are there other user, policy, or ssh restrictions that I can add while still allowing the BackupPC client user to rsync all files?
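
    One further restriction worth considering: point the forced command at a validating wrapper instead of at rsync itself, so that only rsync's server-side invocation is accepted. This is the idea behind the rrsync script shipped with rsync; below is a rough, hypothetical Python sketch of the same pattern (the wrapper path and the exact match rule are illustrative and would need adapting to the command BackupPC actually sends):

        #!/usr/bin/env python3
        # Hypothetical /usr/local/bin/rsync-only, referenced from authorized_keys as:
        #   from="IPTOSERVER",command="/usr/local/bin/rsync-only",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA...
        import os
        import sys

        # sshd preserves the client's real request here when a forced command runs.
        requested = os.environ.get("SSH_ORIGINAL_COMMAND", "")
        argv = requested.split()

        # Accept only rsync's server-side invocation; refuse everything else.
        if argv and argv[0] == "/usr/bin/rsync" and "--server" in argv:
            os.execv("/usr/bin/rsync", argv)

        sys.exit("rejected command: %r" % requested)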

    Read the article

  • How secure is using "Normal password" for SMTP with connection type = STARTTLS?

    - by harshath.jr
    I'm using an email client for the first time; for the most part I've always used Gmail via the web interface. Now I'm setting up Thunderbird to connect to an email server of my own (on my own server, own domain name, etc.). The server machine (and the email server on it) was preconfigured for me. I've figured out a way to send and receive email, but I noticed that in the outgoing and incoming server sections, the connection type was STARTTLS (not SSL/TLS), and the authentication type was "Normal Password". Does this mean that the password will be sent across in plain text? I'm very paranoid about security; this is the only configuration that works for me. Can someone please post links that explain how SMTP (my outbound server) and IMAP (my inbound server) work, and which connection type means what? Thanks! PS: If this question does not belong here, please redirect me.
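
    For what it's worth, with STARTTLS the connection is upgraded to TLS before any credentials are sent, so a "Normal Password" should only ever travel inside the encrypted channel; the residual risk is a client that silently carries on unencrypted when the upgrade fails. A quick probe of what the server offers, sketched with Python's smtplib (host and credentials are placeholders):

        import smtplib

        server = smtplib.SMTP("mail.example.com", 587, timeout=10)  # placeholder host/port
        server.ehlo()
        print("STARTTLS offered:", server.has_extn("starttls"))
        server.starttls()   # the TLS handshake happens here, before any password is sent
        server.ehlo()       # re-identify over the now-encrypted channel
        server.login("user", "secret")  # the "normal password", now inside TLS
        server.quit()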

    Read the article

  • Linux live cd with Broadcom Wi-fi support

    - by paul simmons
    I am looking for a live distro that has out-of-the-box Broadcom wireless support. I am pretty happy with my Ubuntu installation, and as long as I have an Ethernet connection when it's first installed, I can install the Broadcom drivers over the internet. But being a little paranoid, I do my secure operations (banking, etc.) from a live CD with zero hard disk access, so nothing is recorded. So far I plug in Ethernet to do such things with the live CD, but it would be nice if I could do the same thing over wireless.

    Read the article

  • Can I use my Belkin router as a repeater?

    - by Kyle R
    I've got a brand new, unopened Belkin N600 DB router (yay 50% off sale), and I can't seem to figure out if I can use it as a repeater. As far as I can tell, it doesn't support DD-WRT or Tomato, but I'm wondering if there may be some way to rig it up to repeat Wi-Fi. Also, I really need to know before I open it because, being a bit tight on money at the moment, I'll probably sell it if I can't use it as a repeater. Finally, two caveats: I can't use the other wireless router for anything other than broadcasting straight from the modem (my parents are paranoid about anything technological), and I can't run Ethernet to the second router due to the layout of the house. Thanks so much for the help everyone; if I'm being vague, tell me! EDIT: Also, the manual here is pretty useless on this topic as far as I can tell.

    Read the article

  • Windows Explorer - How can a large file have a zero "Size on disk" value? What does it mean?

    - by Jaans
    I would expect some discrepancy between "Size" and "Size on disk" in Windows Explorer due to file system allocations, etc. Below is a screenshot of an example file on a Windows 2012 R2 file server that has an 81.4 MB "Size", but whose "Size on disk" is 0 bytes. What gives? I have other files doing the same, yet another set of files and folders behaves as expected, showing a size on disk relatively close to the actual file size. The volume is a basic disk, formatted with NTFS and the default 4K allocation units. No compression is set for any file or folder on the volume. (For those more paranoid: I did a malware scan, and also confirmed there are no alternate data streams (ADS) associated with the file in question.) The user account running Windows Explorer is the domain administrator, and the file owner is also the domain administrator. Thanks for reading!
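
    One way to investigate, independent of Explorer, is to ask Windows for the allocated size directly; a sketch using Python's ctypes and the Win32 GetCompressedFileSizeW call (the path is a placeholder). On a 2012 R2 file server, a large file with near-zero allocation would be consistent with a sparse file or a Data Deduplication reparse point:

        import ctypes
        import os
        from ctypes import wintypes

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        kernel32.GetCompressedFileSizeW.argtypes = [wintypes.LPCWSTR, ctypes.POINTER(wintypes.DWORD)]
        kernel32.GetCompressedFileSizeW.restype = wintypes.DWORD

        def allocated_size(path):
            # Returns the bytes actually allocated (sparse/compressed aware).
            high = wintypes.DWORD(0)
            low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
            if low == 0xFFFFFFFF and ctypes.get_last_error() != 0:
                raise ctypes.WinError(ctypes.get_last_error())
            return (high.value << 32) | low

        path = r"C:\share\example.dat"  # placeholder
        print("Size:        ", os.path.getsize(path))
        print("Size on disk:", allocated_size(path))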

    Read the article

  • A unique identifier for a Domain

    - by jchoover
    I asked a question over on Stack Overflow and was directed to ask a related one here to see if I could get any additional input. Basically, I am looking to have my application be aware of what domain it's running under, if any at all. (I want to expose certain debugging facilities only in-house, and due to our deployment model it isn't possible to have a different build.) Since I am over-paranoid, I didn't want to rely on just the domain name to ensure we are in-house. As such, I noted that the DOMAIN_CONTROLLER_INFO (http://msdn.microsoft.com/en-us/library/ms675912(v=vs.85).aspx) returned from DsGetDcName (http://msdn.microsoft.com/en-us/library/ms675983(v=vs.85).aspx) has a GUID associated with it; however, I can find little, if any, information on it. I am assuming this GUID is generated at the time the first DC in a domain is created, and that it would live on for the life of the domain. Does anyone have any inside knowledge and would be kind enough to confirm or deny my assumptions?
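
    For reference, a rough ctypes sketch of pulling that GUID out of the same DsGetDcNameW call from Python (error handling kept minimal; treat the field layout as an assumption to check against the MSDN page above):

        import ctypes
        import uuid
        from ctypes import wintypes

        class DOMAIN_CONTROLLER_INFOW(ctypes.Structure):
            _fields_ = [
                ("DomainControllerName", wintypes.LPWSTR),
                ("DomainControllerAddress", wintypes.LPWSTR),
                ("DomainControllerAddressType", wintypes.ULONG),
                ("DomainGuid", ctypes.c_ubyte * 16),
                ("DomainName", wintypes.LPWSTR),
                ("DnsForestName", wintypes.LPWSTR),
                ("Flags", wintypes.ULONG),
                ("DcSiteName", wintypes.LPWSTR),
                ("ClientSiteName", wintypes.LPWSTR),
            ]

        netapi32 = ctypes.WinDLL("netapi32")
        info = ctypes.POINTER(DOMAIN_CONTROLLER_INFOW)()
        # 0x40000000 = DS_RETURN_DNS_NAME
        rc = netapi32.DsGetDcNameW(None, None, None, None, 0x40000000, ctypes.byref(info))
        if rc == 0:
            print("domain:", info.contents.DomainName)
            print("guid:  ", uuid.UUID(bytes_le=bytes(info.contents.DomainGuid)))
            netapi32.NetApiBufferFree(info)
        else:
            print("not joined to a domain? error", rc)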

    Read the article

  • Clang warning flags for Objective-C development

    - by Macmade
    As a C & Objective-C programmer, I'm a bit paranoid about compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use, and turn most of them on, unless I have a really good reason not to. I personally think this may actually improve coding skills, as well as potential code portability, and prevent some issues, as it forces you to be aware of every little detail and of potential implementation and architecture issues, and so on... It's also, in my opinion, a good everyday learning tool, even if you're an experienced programmer. For the subjective part of this question, I'm interested in hearing from other developers (mainly C, Objective-C and C++) about this topic. Do you actually care about stuff like pedantic warnings, etc.? And if yes or no, why? Now about Objective-C: I recently switched completely to the LLVM toolchain (with Clang) instead of GCC. On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall):

        -Wall -Wbad-function-cast -Wcast-align -Wconversion -Wdeclaration-after-statement
        -Wdeprecated-implementations -Wextra -Wfloat-equal -Wformat=2 -Wformat-nonliteral
        -Wfour-char-constants -Wimplicit-atomic-properties -Wmissing-braces -Wmissing-declarations
        -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn -Wmissing-prototypes
        -Wnested-externs -Wnewline-eof -Wold-style-definition -Woverlength-strings -Wparentheses
        -Wpointer-arith -Wredundant-decls -Wreturn-type -Wsequence-point -Wshadow -Wshorten-64-to-32
        -Wsign-compare -Wsign-conversion -Wstrict-prototypes -Wstrict-selector-match -Wswitch
        -Wswitch-default -Wswitch-enum -Wundeclared-selector -Wuninitialized -Wunknown-pragmas
        -Wunreachable-code -Wunused-function -Wunused-label -Wunused-parameter -Wunused-value
        -Wunused-variable -Wwrite-strings

    I'm interested in hearing what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why?

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots. I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year. Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight into how SQL Server works", which is definitely a new concept. I've done a lot of reading and watching on SQL Server internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques. It's an itch I get every few months, and, well, it sure beats workin'.

    I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them and show how knowing even the basics of internals can dramatically improve your database's performance. It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;)

    This blog post isn't a traditional "deep dive" into internals; it's more of an approach to find out how a program works. It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals. I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it. Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files.

    Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information. And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.)

    Once you download and install Strings.exe you can run it from the command line. For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM):

        cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn"
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt

    I've limited myself to DLLs and EXEs that have "sql" in their names. There are quite a few more, but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings. You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish. (If you think that's bad, just try opening the raw DLL or EXE file in Notepad. And by the way, don't do this in production, or even on a running instance of SQL Server.)
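
    (If you're on a machine without the Sysinternals suite, the ASCII half of what Strings.exe does is easy to approximate; here's a hypothetical Python stand-in. Note that the real tool also extracts UTF-16 strings, which this skips.)

        import glob
        import re
        import sys

        # Runs of 6 or more printable ASCII characters, similar to `strings -n 6`.
        PRINTABLE = re.compile(rb"[\x20-\x7e]{6,}")

        pattern = sys.argv[1] if len(sys.argv) > 1 else "*sql*.dll"
        for path in glob.glob(pattern):
            with open(path, "rb") as f:
                data = f.read()
            for match in PRINTABLE.finditer(data):
                print(match.group().decode("ascii"))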
    Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there. As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting. I'm boring like that.

    Sometimes, though, having these files available can lead to some incredible learning experiences. For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons. He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne". Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented, and tried a few more examples to see what other reasons could be generated.

    Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts. Once I found which file it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed. As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons. And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! – and am curious about the PDW compilation and Cloud DB replication reasons.

    One reason completely stumped me: NoParallelHekatonPlan. What the heck is a hekaton? Google and Wikipedia were vague, and the top results were not in English. I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure. I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton. Here's what I found:

        hekaton_slow_param_passing
        Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
        The reason why Hekaton parameter passing code took the slow code path
        hekaton_slow_param_pass_reason
        sp_deploy_hekaton_database
        sp_undeploy_hekaton_database
        sp_drop_hekaton_database
        sp_checkpoint_hekaton_database
        sp_restore_hekaton_database
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

    Interesting! The first 4 entries (in red) mention parameters and "slow code". Could this be the foundation of the mythical DBCC RUNFASTER command? Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance? Are these special, super-undocumented DIB (databases in black)?
    I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

        SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%'
        SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

    Which revealed:

        name
        ------------------------
        (0 row(s) affected)

        name
        ------------------------
        sp_createstats
        sp_recompile
        sp_updatestats
        (3 row(s) affected)

    Hmm. Well, that didn't find much. Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading. I know I'd fall asleep without it.)

    OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton". sp_createstats:

        -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
        -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

    OK, that's interesting; let's go looking down a little further:

        ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

    Wellllll, that tells us a few new things:

        There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
        They are not standard user tables and probably not in memory (UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.)
        The OBJECTPROPERTY function has an undocumented TableIsInMemory option

    Let's check out sp_recompile:

        -- (3) Must not be a Hekaton procedure.

    And once again go a little further:

        if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
            ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
            ObjectProperty(@objid, 'IsView') = 0 AND
            -- Hekaton procedure cannot be recompiled
            -- Make them go through schema version bumping branch, which will fail
            ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

    And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc. (If you experiment with this you'll find the option returns null; I think it only works when called from a system object.) This is neat!

    Sadly, sp_updatestats doesn't reveal anything new; the comments about hekaton are the same as in sp_createstats. But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:

        SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

    I'll leave that to you as more homework. I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features. That seems to be really good advice!

    Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server! Does this mean we can access the source code for SQL Server? As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.
    While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server. And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones. Granted, those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I; go look for yourself!) And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else. It's also instructive to see how Microsoft organizes their source directories, and how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references; go do some exploring!

    So what did we learn? We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server. We've even learned how to use command-line utilities! We are now 1337 h4X0rz! (Not really. I hate that leetspeak crap.)

    Although, I must confess I might've gone too far with the "conspiracy" part of this post. I apologize for that; it's just my overactive imagination. There's really no hidden agenda or conspiracy regarding SQL Server internals. It's not The Matrix. It's not like you'd find anything like that in there:

        Attach Matrix Database
        DM_MATRIX_COMM_PIPELINES
        MATRIXXACTPARTICIPANTS
        dm_matrix_agents

    Alright, enough of this paranoid ranting! Microsoft are not really evil! It's not like they're The Borg from Star Trek:

        ALTER FEDERATION DROP
        ALTER FEDERATION SPLIT
        DROP FEDERATION

    #tsql2sday

    Read the article

  • Ask the Readers: Which Search Engine Do You Use?

    - by Mysticgeek
    While Google dominates the search engine market, there are certainly other alternatives out there such as Bing and Yahoo. Today we’re curious about which one you use, and would you ever consider another one? Believe it or not…not everyone uses Google (surprising indeed), there are several other alternatives out there that some of you may be using and we’re interested in hearing about it. One of the more unique and interesting ones we previously covered is ixquick, which doesn’t save your IP or any information and can be customized quite nicely if you’re the paranoid type. We’re interested in hearing about which search engine you currently use. Would you ever switch to a different one? Have you ever tried to experiment and not use Google (or your favorite engine) for a week? Leave a comment below and join in the discussion!

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects', triaged into three categories: Low, Medium and High. The issue we have is whether everything in this list should be recorded as a 'defect'; some of the issues they found would be better described as refinements, or even 'nice-to-haves', and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable about this. While the relationship is good we don't see a huge problem with this, but we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or to take things too personally here, and we are happy to make all of the changes identified; however, we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid here? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • Removed Java, replaced with newest "Sun Java"; disc won't boot, and won't let me re-install GRUB using boot repair disc

    - by Al Rowe
    Had a minor problem with my stock market platform: the setup screen would freeze the program. Called their tech support, got their "Linux guy", who advised removing all Java and replacing it, not with the Synaptic version, but with the newest Sun Java. After removing it, the computer auto-rebooted and went to the blue mem-test screen. It showed no errors, but I couldn't get back in. Tried two versions of the boot repair disc from ISO (checked md5sum, showed good), but the fix aborted, reporting an apt error detected. Opened a terminal and typed (or copied/pasted): sudo chroot "/mnt/boot-sav/sda1" apt-get -f install. My system is Ubuntu 12.04. Had a few very minor issues from install, all fixed. Also added some of my favorite GNOME tricks just to make life easier, but none that could have caused this: scripts to add shortcuts to the desktop, open a terminal from inside any folder, access a root terminal, etc. The system was firewalled and using Avast antivirus (OK, I'm paranoid; I used to do Windows sysop and security work), but I'm a relative newbie to Linux.

    Read the article

  • What exactly do I have to pay attention for when choosing Windows Hosting Provider?

    - by user850010
    This is my first time choosing a hosting company. It is for a web site made in ASP.NET MVC 3. I thought choosing a provider would be easy, since I found this page, http://www.microsoft.com/web/Hosting/Home, which contains hosting offers. Now, hours later, I am still searching. The reason is that as soon as I start investigating a particular company, something stands out that I do not like. Here are some examples of what I noticed when checking various companies in more detail: The company "about us" page is lacking in information about the company; a few of them had just a general description of what they do and nothing else, while some others had information like a company name but no address. Checking the company name in business registry searches gave no results; two of the companies I checked had both a company name and an address, but I was unable to find them in the registry. Putting the company domain into Google gave mostly results from that domain or web hosting review sites, but not much else; I am assuming that good companies should have search results from other sites too. A low Alexa traffic rank; there was one company whose site looked very professional, but their Alexa traffic ranking was around 2 million. Are there any other factors I should pay attention to when choosing a hosting company? Do I have legitimate concerns, or am I just too paranoid?

    Read the article

  • Newbie tips, please [closed]

    - by eXeP
    So, I just got a new computer and I want to put Ubuntu on my old laptop. I just need a few tips before installing it:

        1. Programs: where to download them, how to install them, and what the file "ending" is (Windows has .exe).
        2. How much is the command line involved? And where can I learn the most common commands?
        3. A few programs you recommend (graphics editing, IDE, video player, web browser).
        4. Do I have to download drivers when installing a new OS? I plan on getting fully rid of Windows. I have no idea of the name of my graphics card, so how can I find out what it is if I have to download drivers? (I don't know the name because it's not on the original box, or anywhere on the internet, believe me.)
        5. When installing a new OS, does it destroy everything else on the hard drive?
        6. Anti-virus: do I need one? I'm not super paranoid, and I don't visit "shady" sites.

    Please note that I have never used Linux, or any other OS than Windows. Sorry for my bad English. If this is the wrong place to post this, then please remove it. Thank you.

    Read the article

  • Facebook - Isn't this a big vulnerability risk for users? (After Password Change)

    - by Trufa
    I would like to know your opinions as programmers/developers. When I changed my Facebook password yesterday, by mistake I entered the old one and got this: Am I missing something here, or is this a big potential risk for users? In my opinion this is a problem BECAUSE it is Facebook and is used by, well, everyone, and the latest statistics show that 76.3% of the users are idiots [source: me]; that is more than 3/4!! All kidding aside: Isn't this useful information for an attacker? It reveals private information about the user! It could help the attacker gain access to another site on which the user used the same password. Granted, you shouldn't use the same password twice (but remember: 76.3%!!!). Doesn't this simply increase the surface area for attackers? It increases the chances of getting useful information, at least. On a site like Facebook, a first choice for hackers and (bad) people interested in valuable personal information, shouldn't anything that increases the chance of a vulnerability be removed? Am I missing something? Am I being paranoid? Will 76.3% of accounts be hacked after this post? Thanks in advance!! BTW if you want to try it out, a dummy account: user: [email protected] (old) password: hunter2

    Read the article

  • Is there supposed to be a Windows Network folder in the file manager?

    - by Cindy
    First of all I just want to say, I wish I had tried Ubuntu a couple of years ago when I first heard about it, but I was like a lot of the population and went with the "easy way" and stuck with Windows because I didn't want to take the time to learn something new. Well, about 3 months ago I realized someone had hacked into my computer, and then found they had hacked my Facebook account, so I decided I had better do a complete credit check. I found student loans (totalling about 30,000 so far) had recently shown up on my credit report. I think it's going to be a long, long road to recovery now, but I'm hoping Ubuntu will be a start and definitely an eye-opener. My relationship with Windows is over. I had 3 antivirus programs running, and none were protecting me like I thought they were. It turned out a free program that I downloaded was the only one that could detect and clean the virus, but by then it was too late. Anyhow, my question is: I pulled my hard drive out of my computer and started with a bootable USB version of Ubuntu, which I am using at this point. At first boot, I see that there is a Windows folder when browsing the network. Since there is no operating system present, besides the USB that I boot from, should there be a Windows network folder? I am using a local ISP (and won't be for much longer, because I am very paranoid at this point) and I want to make sure all is OK before I put my new hard drive in and install Ubuntu. Any help would be appreciated. Also, I want to thank Ubuntu and the community for giving people an alternative.

    Read the article

  • Should I use my real name in my open source project?

    - by Jardo
    I developed a few freeware programs in the past which I signed with my pseudonym Jardo. I'm now planning to release my first open source project and was thinking of using my full real name in the project files (as the "author"). I thought it would be good to use my name as my "trademark", so if someone (perhaps a future headhunter) googles my name, they'll find my projects. But on the other side, I feel a bit paranoid about disclosing my name (at the very least I could get a lot of spam to my email; it's not that hard to guess someone's private email address from their name). What do you think can be "dangerous" about disclosing your full name? What are the pros and cons? Do you use your real name or a pseudonym in your projects? I read this question: What are the advantages and disadvantages to using your real name online? but that doesn't apply to me because it's about using your real name online (internet discussions, profiles, etc.), where I personally see no reason to use my real name... And there is also this question: Copyrighting software, templates, etc. under real name or screen name? which deals with creating a business or a brand, which also doesn't apply to me because I will never sell/give away my open source project, and if someone else joins in, they can write their name as co-author without any problems...

    Read the article

  • Disposables, Using & Try/Catch Blocks

    - by Aren B
    Having a mental block today; need a hand verifying my logic isn't fubar'ed. Traditionally I would do file I/O similar to this:

        FileStream fs = null; // So it's visible in the finally block
        try
        {
            fs = File.Open("Foo.txt", FileMode.Open);
            /// Do Stuff
        }
        catch (IOException)
        {
            /// Handle Stuff
        }
        finally
        {
            if (fs != null)
                fs.Close();
        }

    However, this isn't very elegant. Ideally I'd like to use the using block to dispose of the FileStream when I'm done; however, I am unsure about the synergy between using and try/catch. This is how I'd like to implement the above:

        try
        {
            using (FileStream fs = File.Open("Foo.txt", FileMode.Open))
            {
                /// Do Stuff
            }
        }
        catch (Exception)
        {
            /// Handle Stuff
        }

    However, I'm worried that a premature exit (via a thrown exception) from within the using block may not allow the using block to complete execution and clean up its object. Am I just paranoid, or will this actually work the way I intend it to?

    Read the article

  • Is there any real benefit to using ASP.Net Authentication with ASP.Net MVC?

    - by alchemical
    I've been researching this intensely for the past few days. We're developing an ASP.NET MVC site that needs to support 100,000+ users. We'd like to keep it fast, scalable, and simple. We have our own SQL database tables for user and user_role, etc. We are not using server controls. Given that there are no server controls, and that a custom MembershipProvider would need to be created, is there any benefit left to using ASP.NET Auth/Membership? The alternative would seem to be writing custom code that drops a unique CustomerID in a cookie and authenticates with that. Or, if we're paranoid about sniffers, we could encrypt the cookie as well. Is there any real benefit in this scenario (MVC, and customer data in our own tables) to using the ASP.NET auth/membership framework, or is a fully custom solution a viable route?

    Read the article

  • How to prevent DOS attacks using image resizing in an ASP.NET application?

    - by Waleed Eissa
    I'm currently developing a site where users can upload images to use as avatars. I know this makes me sound a little paranoid, but I was wondering: what if a malicious user uploads an image with incredibly large dimensions that will eat the server's memory (as a DoS attack)? I already have a limit on the file size that can be uploaded (250 KB), but even that size can allow an image with incredibly large dimensions, if the image is, for example, a JPEG that contains one color and was created with a very low quality setting. Taking into consideration that the image is loaded into memory as a bitmap (i.e. not compressed) when being resized, I wonder if such DoS attacks occur; even to check the image dimensions, it has to be loaded into memory first. Did you hear about any attacks that exploited this? Am I too worried?
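
    Such "decompression bomb" uploads are a real pattern: a tiny single-color JPEG or PNG can expand to gigabytes of bitmap. The usual defense is to read the dimensions from the image header and reject the file before decoding any pixels. The question concerns ASP.NET, but here is the idea sketched in Python with the Pillow library, whose Image.open parses only the header until pixel data is requested (the pixel cap is an arbitrary example):

        from PIL import Image  # assumes the Pillow library

        MAX_PIXELS = 4_000_000  # hypothetical cap, roughly 2000 x 2000

        def open_checked(path):
            img = Image.open(path)  # lazy: parses the header, no pixel decode yet
            width, height = img.size
            if width * height > MAX_PIXELS:
                raise ValueError(f"rejected: {width}x{height} exceeds pixel cap")
            img.load()  # only now decode the pixels into memory
            return img

    Pillow also ships its own guard (Image.MAX_IMAGE_PIXELS, beyond which it raises a DecompressionBombError); the analogous trick in .NET is to read the dimensions from the image header before invoking the full decoder.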

    Read the article

  • Should I obscure primary key values?

    - by Scott
    I'm building a web application where the front end is a highly specialized search engine. Searching is handled at the main URL, and the user is passed off to a sub-directory when they click on a search result for a more detailed display. This hand-off is being done as a GET request, with the primary key passed in the query string. I seem to recall reading somewhere that exposing primary keys to the user was not a good idea, so I decided to implement reversible encryption. I'm starting to wonder if I'm just being paranoid. The reversible encryption (base64) is probably easily broken by anybody who cares to try, makes the URLs very ugly, and also makes them longer than they otherwise would be. Should I just drop the encryption and send my primary keys in the clear?
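
    Worth noting: base64 is an encoding, not encryption, so it only hides the key from casual eyes. If the goal is to keep users from tampering with or enumerating IDs, signing beats obscuring; a sketch using Python's standard hmac module (the secret key is a placeholder and must stay server-side):

        import base64
        import hashlib
        import hmac

        SECRET = b"replace-with-a-real-secret"  # placeholder key

        def encode_id(pk: int) -> str:
            data = str(pk).encode()
            sig = hmac.new(SECRET, data, hashlib.sha256).hexdigest()[:16]
            token = data + b"." + sig.encode()
            return base64.urlsafe_b64encode(token).decode().rstrip("=")

        def decode_id(token: str) -> int:
            raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
            data, sig = raw.rsplit(b".", 1)
            expected = hmac.new(SECRET, data, hashlib.sha256).hexdigest()[:16].encode()
            if not hmac.compare_digest(sig, expected):
                raise ValueError("invalid or tampered id")
            return int(data.decode())

    A guessed or incremented ID then fails validation, though proper authorization checks on the detail page matter more than any URL obfuscation.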

    Read the article

  • How should I deploy a patch to a Passenger-based production Rails application without downtime?

    - by Olly
    I have a Passenger-based production Rails application which has thousands of users. Occasionally we need to apply a code patch (we use git) and the current process for doing this (you can assume there are no data migrations) is:

        1. Perform git pull origin [production-branch-name] on the server
        2. touch tmp/restart.txt to restart Passenger

    This allows us to patch the server without having to resort to putting up a maintenance page, which is great, but it doesn't feel quite right since it's not actually a proper 'deployment', and we still need to manually update the revision file and our deployment doesn't appear in the Hoptoad or NewRelic services we use. Ideally I would run cap production deploy and just let the standard Capistrano deployment script take care of everything, but is this a dangerous thing to do without putting up a maintenance page? This deployment process seems to be fairly safe in that the new revision is deployed to a completely separate folder and only right at the end of the process is a symlink re-created to switch the currently deployed version, but I'm still fairly paranoid about this somehow resulting in a lost or failed request.

    Read the article
