Search Results

Search found 455 results on 19 pages for 'codys hole'.

Page 16/19 | < Previous Page | 12 13 14 15 16 17 18 19  | Next Page >

  • What is the benefit of using ONLY OpenID authentication on a site?

    - by Peter
    From my experience with OpenID, I see a number of significant downsides:
    - It adds a single point of failure to the site, and it is not a failure the site can fix even if it is detected. If the OpenID provider is down for three days, what recourse does the site have to let its users log in and access the information they own?
    - It takes users to another site's content every time they log on to your site. Even if the OpenID provider has no outage, the user is redirected to the provider's site to log in, and that login page has its own content and links. So there is a chance a user will actually be drawn away from the site and down the Internet rabbit hole. Why would I want to send my users to another company's website? [Note: my provider no longer does this and seems to have fixed this problem (for now).]
    - It adds a non-trivial amount of time to signup. To sign up with the site, a new user is forced to read a new standard, choose a provider, and sign up with them. Standards are something that technical people should agree to in order to make the user experience frictionless; they are not something that should be thrust on users.
    - It is a phisher's dream. OpenID is incredibly insecure, and stealing a person's ID as they log in is trivially easy. [Taken from David Arno's answer below.]
    For all of these downsides, the one upside is to allow users to have fewer logins on the Internet. If a site offers OpenID as opt-in, then users who want that feature can use it. What I would like to understand is: what benefit does a site get from making OpenID mandatory?

    Read the article

  • Preventing HTML character entities in locale files from getting munged by Rails3 xss protection

    - by Chris S
    We're building an app, our first using Rails 3, and we're having to build I18n in from the outset. Being perfectionists, we want real typography to be used in our views: dashes, curled quotes, ellipses et al. This means in our locales/xx.yml files we have two choices: use real UTF-8 characters inline, which should work but is hard to type and scares me due to the amount of software that still does naughty things to Unicode; or use HTML character entities (&#8217; &#8212; etc.), which are easier to type and probably more compatible with misbehaving software. I'd rather take the second option; however, the auto-escaping in Rails 3 makes this problematic, as the ampersands in the YAML get auto-converted into character entities themselves, resulting in visible '&#8217;'s in the browser. Obviously this can be worked around by using raw on strings, i.e.: raw t('views.signup.organisation_details') But we're not happy going down the route of globally raw-ing every time we t something, as it leaves us open to making an error and producing an XSS hole. We could selectively raw strings which we know contain character entities, but this would be hard to scale and just feels wrong - besides, a string which contains an entity in one language may not in another. Any suggestions on a clever Rails-y way to fix this? Or are we doomed to crap typography, XSS holes, hours of wasted effort, or all three?

    Read the article

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (C1X; I haven't checked the earlier ones, and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows: Section 6.5.3.4 ("The sizeof operator") para 2 states "The sizeof operator yields the size (in bytes) of its operand". Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1". Section 7.20.3.3 describes void *malloc(size_t sz), but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than 8 bits wide. My question is simply this: in an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former, point 2 indicates the latter. Does anyone with more C-standard-fu than me have an answer for this?

    Read the article

  • Cookie blocked/not saved in IFRAME in Internet Explorer

    - by Piskvor
    I have two websites, let's say they're example.com and anotherexample.net. On anotherexample.net/page.html, I have an IFRAME SRC="http://example.com/someform.asp". That IFRAME displays a form for the user to fill out and submit to http://example.com/process.asp. When I open the form ("someform.asp") in its own browser window, all works well. However, when I load someform.asp as an IFRAME in IE 6 or IE 7, the cookies for example.com are not saved. In Firefox this problem doesn't appear. For testing purposes, I've created a similar setup on http://newmoon.wz.cz/test/page.php . example.com uses cookie-based sessions (and there's nothing I can do about that), so without cookies, process.asp won't execute. How do I force IE to save those cookies? Results of sniffing the HTTP traffic: on the GET /someform.asp response, there's a valid per-session Set-Cookie header (e.g. Set-Cookie: ASPKSJIUIUGF=JKHJUHVGFYTTYFY), but on the POST /process.asp request, there is no Cookie header at all. Edit3: some AJAX+server-side scripting is apparently capable of sidestepping the problem, but that looks very much like a bug, plus it opens a whole new set of security holes. I don't want my applications to use a combination of bug+security hole just because it's easy. Edit: the P3P policy was the root cause; full explanation below.
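    For anyone hitting the same thing: IE 6/7 refuses third-party cookies in an IFRAME unless the framed site sends a compact P3P policy header, which is the fix the edit above refers to. The framed page in the question is ASP, but the mechanism is identical in any server language; here is a hedged PHP sketch (the policy tokens are examples only and should reflect your real privacy practices):

        <?php
        // Send a compact P3P policy before any cookie is set, so that IE 6/7
        // accepts the session cookie even when this page is shown inside a
        // third-party IFRAME.
        header('P3P: CP="CAO PSA OUR"');

        session_start(); // the session cookie now goes out alongside the P3P header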

    Read the article

  • How do you use stl's functions like for_each?

    - by thomas-gies
    I started using STL containers because they came in very handy when I needed the functionality of a list, set and map and had nothing else available in my programming environment. I did not care much about the ideas behind them. STL documentation was only interesting up to the point where it came to functions, etc. Then I skipped reading and just used the containers. But yesterday, still being relaxed from my holidays, I just gave it a try and wanted to go a bit more the STL way. So I used the transform function (can I have a little bit of applause for me, thank you). From an academic point of view it really looked interesting, and it worked. But the thing that bothers me is that if you intensify the use of those functions, you need piles of helper classes for almost everything you want to do in your code. The whole logic of the program is sliced into tiny pieces. This slicing is not the result of good coding habits; it's just a technical need - something that probably makes my life harder, not easier. And I learned the hard way that you should always choose the simplest approach that solves the problem at hand. I can't see what, for example, the for_each function is doing for me that justifies the use of a helper class over several simple lines of code that sit inside a normal loop, so that everybody can see what is going on. I would like to know what you think about my concerns. Did you see it like I do when you started working this way, and did you change your mind when you got used to it? Are there benefits that I overlooked? Or do you just ignore this stuff as I did (and will probably go on doing)? Thanks. PS: I know that there is a real for_each loop in Boost, but I ignore it here since I guess it is just a convenient wrapper for my usual loops with iterators.

    Read the article

  • Rewarding iOS app beta testers with in app purchase?

    - by Partridge
    My iOS app is going to be free, but with additional functionality enabled via in app purchase. Currently beta testers are doing a great job finding bugs and I want to reward them for their hard work. I think the least I can do is give them a full version of the app so that they don't have to buy the functionality themselves. However, I'm not sure what the best way to do this is. There do not appear to be promo codes for in app purchase so I can't just email out promo codes. I have all the tester device UDIDs so when the app launches I could grab the device UDID and compare it to an internal list of 'approved' UDIDs. Is this what other developers do? My concerns: The in app purchase content would not be tied to their iTunes account, so if beta testers move to a new device they would not be able to enable the content unless I released a new build in the app store with their new UDID. So they may have to buy it eventually anyway. Having an internal list leaves a hole for hackers to modify the list and add themselves to it. What would you do?

    Read the article

  • Ruby core documentation quality

    - by karatedog
    I'm relatively new to Ruby and have limited time, therefore I try out simple things. Recently I needed to create a file, and because I'm lazy as hell, I ran to Google. The result: File.open(local_filename, 'w') {|f| f.write(doc) } Shame on me, it is very straightforward; I should have done it myself. Then I wanted to check what Ruby magic the File class' methods offer, or whether there's any 'simplification' when invoking those methods, so I headed for the documentation here and checked the File class. The 1.8.6 documentation presents me with "ftools.rb: Extra tools for the File class" under the 'File' class, which is not what I'm looking for. The 1.8.7 documentation seems OK for the 'File' class; there are a plethora of methods. Except 'open'. The 1.9 documentation finally shows me the 'open' method. And I had almost the same tour with Net::HTTP. Do I exaggerate when I think good old Turbo Pascal 7.0's documentation was better organized than Ruby's documentation is right now? Is there any other source for the uninitiated to collect knowledge from? Or is it possible that I just tumbled into a documentation hole and the rest is super-brilliant-five-star organized? Thanks

    Read the article

  • How might one cope with the ambiguous value produced by GetDllDirectory?

    - by Integer Poet
    GetDllDirectory produces an ambiguous value. When the string this call produces is empty, it means one of the following: nobody has called SetDllDirectory somebody passed NULL to SetDllDirectory somebody passed an empty string to SetDllDirectory The first two cases are equivalent for my purposes, but the third case is a problem. If I want to write save/restore code (call GetDllDirectory to save the "old" value, SetDllDirectory to set a "new" value temporarily, and later SetDllDirectory again to restore the "old" value), I run the risk of reversing some other programmer's intent. If the other programmer intended for the current working directory to be in the DLL search order (in other words, one of the first two bullets is true), and I pass an empty string to SetDllDirectory, I will be taking the current working directory out of the DLL search order, reversing the other programmer's intent. Can anyone suggest an approach to eliminate or work around this ambiguity? P.S. I know having the current working directory in the DLL search order could be interpreted as a security hole. Nevertheless, it is the default behavior, and my code is not in a position to undo that; my code needs to be compatible with the expectations of all potential callers, many of which are large and old and beyond my control.

    Read the article

  • Protect a form from a hijacking hack

    - by Karem
    Hello - today I discovered a hack against my site. When you write a message on a user's wall (in my community site) it runs an AJAX call to insert the message into the db, and on success it slides the message down and shows it. Works fine with no problem. Then I did some rethinking. I am using the POST method for this, and if it were a GET method you could easily do ?msg=haxmsg&usr=12345679. But what could you do to get around the POST method? I made a new HTML document, made a form, set its action to "site.com/insertwall.php" (the file that is normally used by the AJAX call), added some input fields named exactly like the ones in the AJAX call (msg, uID (userid), BuID (by userid)), and added a submit button. I know I have a page_protect() function which requires you to log in, and if you aren't logged in you are redirected (via header) to index.php. So I logged in (started a session on my site.com) and then pressed this submit button. And then - whoops - I saw on my site that it had created a new message. I was like wow, it was that easy to hijack the POST method; I thought it was a little more secure or something. I would like to know what I can do to prevent this hijacking, as I don't even want to know what real hackers could do with this "hole". page_protect() checks that the session comes from the same HTTP user agent and so on, and this works fine (if I run the form without logging in, it just redirects me to the start page), but it wouldn't take long to figure out to log in first and then run it. Any advice is appreciated a lot. I would like to keep my AJAX calls as secure as possible, and all of them use the POST method. What could I do in insertwall.php to check that the request really comes from my own pages? Thank you
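    What is described here is a classic cross-site request forgery (CSRF): the attacker's form rides on the victim's existing session, so checking the session or the user agent cannot stop it. The standard fix is a secret per-session token that only pages served by your own site embed in the form, and that insertwall.php then verifies. A minimal PHP sketch of the idea (field and variable names are made up for illustration; random_bytes() needs PHP 7+ and hash_equals() needs PHP 5.6+, older setups can substitute another CSPRNG such as openssl_random_pseudo_bytes()):

        <?php
        session_start();

        // Part 1 - on the page that renders the wall form: create a
        // per-session token and embed it as a hidden field.
        if (empty($_SESSION['csrf_token'])) {
            $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
        }
        $tokenField = '<input type="hidden" name="csrf_token" value="'
                    . htmlspecialchars($_SESSION['csrf_token']) . '">';

        // Part 2 - at the top of insertwall.php, before touching the db:
        // reject any POST that does not carry a matching token.
        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            if (empty($_POST['csrf_token'])
                    || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
                header('Location: index.php');
                exit;
            }
        }

    Because a page on another site cannot read your form, it cannot learn the token, so the forged POST is rejected even though the victim is logged in.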

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device. So, the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation. When we install this service, the port forwarding needed so the outside world can gain access to Tomcat always seems to become a problem - most of the time the owner doesn't know the router password, and so on. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic:
    1. Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port, and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it since end-to-end I need SSL (from device to office). I am not sure of the performance implications here, but I know it would work. I would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up UPnP to try and open the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work.
    We have access to a centralized server, which is required for authentication, if that makes it any easier. What else should I be looking at to get this accomplished?

    Read the article

  • Giving write permissions to the IIS user on Windows 2003 Server

    - by Steve
    I am running a website on Windows 2003 Server and IIS6, and I am having problems writing or deleting files in a temporary folder, getting warnings like this: Warning: unlink(C:\Inetpub\wwwroot\cakephp\app\tmp\cache\persistent\myapp_cake_core_cake_): Permission denied in C:\Inetpub\wwwroot\cakephp\lib\Cake\Cache\Engine\FileEngine.php on line 254 I went to the tmp directory and in its properties I gave the IIS user the following permissions: Read & Execute, List Folder Contents, Read. And it is still showing the same warnings. When I am on the properties window, if I click on Advanced, the IIS username appears twice: one entry of Allow type with read & execute permissions, and the other of Deny type with Special permissions. My question is: should I give this user not only the Read & Execute permissions but also these ones: Create Attributes, Create Files/Write Data, Create Folders/Append Data, Delete Subfolders and Files, Delete? They are available to select if I click on the Edit button for the username. Wouldn't I be opening a security hole if I do this? Otherwise, how can I get my website to read and delete the files it uses? Thanks.

    Read the article

  • MediaWiki: how to hide users from the user list?

    - by Dave Everitt
    I've set up MediaWiki 1.15.1 for a client who has added two users by mistake. They now want to hide these users from the user list. It seems this is done via the $wgGroupPermissions array with $wgGroupPermissions['suppress']['hideuser'] = true;, but it isn't at all clear what entry is needed for the hiding to work, or whether a new group ('hidden' or whatever) has to be created first with $wgAddGroups['bureaucrat'] = true;. For now, I've added the two users to be hidden to the 'Oversight' group, which is described as 'Block a username, hiding it from the public (hideuser)', but they still appear on the Special:ListUsers page. At a loss as to how the MediaWiki arrays alter the options displayed in the interface, so far I've added this to LocalSettings.php: $wgGroupPermissions['suppress']['hideuser'] = true; $wgAddGroups['supress'] = true; Or - since they haven't actually added anything to the wiki - could they simply be removed from the MySQL users table, although MediaWiki warns against this? Has anyone else done this successfully? Update - this is a hole in MediaWiki admin (although there are workarounds). See this thread on MediaWiki Users and the note in the reply below.
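    For comparison, here is a hedged sketch of the LocalSettings.php entries that are usually involved (based on the 1.15-era settings mentioned above - verify the group names against your MediaWiki version, and note that 'suppress' is the spelling the software expects):

        <?php
        // LocalSettings.php (sketch)
        // Let members of the built-in 'suppress' (Oversight) group hide user
        // names, and let bureaucrats add accounts to that group.
        $wgGroupPermissions['suppress']['hideuser'] = true;

        // $wgAddGroups maps "who may add" => list of groups they may add users to.
        $wgAddGroups['bureaucrat'][] = 'suppress';

    The hiding itself is then usually performed per account by a member of that group via the block page with its "hide username" option, rather than happening automatically through group membership alone - which would explain why the two users still appear on Special:ListUsers even after the permission is granted.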

    Read the article

  • Apache Cordova (PhoneGap): is JSONP needed for cross-site scripting?

    - by DEX
    I've just started using Apache Cordova. I have a library that makes calls (via AJAX) to a SOAP server. When I run these on my local machine in Chrome, I get cross-site scripting errors when trying to make calls to the service. When I run the same exact code using the Cordova browser in the iOS emulator, the scripts seem to hit the server fine and the response data is received properly. So my question is: how is the Cordova browser able to make these requests without cross-site scripting permissions and JSONP? One thing I noticed is that when the request is sent from iOS, there is no "Origin" header. Is this allowing the Cordova browser to stealthily circumvent the cross-site scripting requirements? Is it possible that the node.js server on the device (I believe this is how Cordova works) is manipulating the headers to allow this? I'd like to avoid enabling cross-site scripting on my site, so I think this "feature" is nice, but I'm wondering if it's a security hole as well. Does anyone have experience with this?
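    What the question calls "cross-site scripting" is really the browser's same-origin restriction on XHR. As far as I understand it, Cordova's webview loads the app from the local file system and (subject to its whitelist configuration) does not apply that restriction to outgoing requests, which is why there is no Origin header and no JSONP is needed; it is a property of the client rather than a hole in the server. If ordinary browsers ever need to call the same endpoint, the usual modern alternative to JSONP is a CORS response header. A hedged PHP sketch of the server side (the allowed origin is a placeholder - '*' would open it to everyone):

        <?php
        // Hypothetical endpoint: allow cross-origin XHR from one known web app
        // instead of requiring JSONP.
        header('Access-Control-Allow-Origin: https://app.example.com');
        header('Access-Control-Allow-Methods: GET, POST, OPTIONS');
        header('Access-Control-Allow-Headers: Content-Type, SOAPAction');

        if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
            exit; // preflight request: the headers above are the whole answer
        }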

    Read the article

  • PHP Mailer Class - Securing Email Credentials

    - by Alan A
    I am using the PHPMailer class to send email from my scripts. The structure is as follows:
    $mail = new PHPMailer;
    $mail->IsSMTP();                      // Set mailer to use SMTP
    $mail->Host = 'myserver.com';         // Specify main and backup server
    $mail->SMTPAuth = true;               // Enable SMTP authentication
    $mail->Username = '[email protected]';   // SMTP username
    $mail->Password = 'user123';          // SMTP password
    $mail->SMTPSecure = 'pass123';
    It seems to me to be a bit of a security hole to have the mailbox credentials in plain view, so I thought I might put them in an external file outside of the web root. My question is how I would then assign these values to the $mail object. I of course know how to use include and/or require... would it simply be a case of:
    $mail->IsSMTP();                      // Set mailer to use SMTP
    $mail->Host = 'myserver.com';         // Specify main and backup server
    $mail->SMTPAuth = true;               // Enable SMTP authentication
    include '../locationOutsideWebroot/emailCredentials.php';
    $mail->SMTPSecure = 'pass123';
    Then emailCredentials.php:
    <?php
    $mail->Username = '[email protected]';
    $mail->Password = 'user123';
    ?>
    Would this be sufficient and secure enough? Thanks, Alan.
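    Including a file that assigns $mail->Username and $mail->Password directly does work, because an included file shares the variable scope of the line that includes it, so $mail is visible inside emailCredentials.php. A slightly more self-contained variant keeps only raw values in the protected file and assigns them afterwards; a hedged sketch (the path and example address are illustrative):

        <?php
        // ../locationOutsideWebroot/emailCredentials.php would contain only:
        //
        //     <?php
        //     return array('username' => 'user@example.com', 'password' => 'user123');
        //
        // The mailer script then pulls the values in after creating $mail:
        $creds = require __DIR__ . '/../locationOutsideWebroot/emailCredentials.php';

        $mail->Username = $creds['username'];
        $mail->Password = $creds['password'];

    Either way, make sure the credentials file really is outside the document root (or otherwise blocked from being served), and note that PHPMailer's SMTPSecure property is meant to hold the encryption type ('ssl' or 'tls'), not a password.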

    Read the article

  • Flash Player, security: If a URL starts with "http://" will the SWF always be loaded into the REMOTE sandbox?

    - by Pavel
    This seems to be a question for a Flash security guru. Suppose we are loading an external SWF movie with MovieClipLoader.loadMovie(url:String). Is it safe to assume that if the url starts with "http://", the movie will be loaded in the REMOTE sandbox? We need to tell local SWFs from remote ones to close a security hole. If you need the context, read on. We have developed a Projector, written in C++, embedding the Flash Player ActiveX control. Our Flash application runs inside the Projector. Soon we want to give our users a way to create plugins for the application. The plugins will obviously be SWF movies. The case I'm afraid of is the following: a bad person creates a malicious evil.swf, pretending it is a nice plugin for our app. If evil.swf is loaded from the local file system, it is granted access to the whole MovieClip tree and the Projector API, opening up C++ file access operations. On the other hand, if evil.swf is loaded from the internet, remotely, it will be locked in the REMOTE sandbox by the Flash security model. Because of this, we need a reliable way to tell a local SWF from a remote one before loading it - and we must not make a mistake. So again, is it safe to assume that if the url begins with "http://", the clip will be loaded inside the REMOTE sandbox?

    Read the article

  • Modifying Django's pre_save/post_save Data

    - by Rodrogo
    Hi, I'm having a hard time grasping the post_save/pre_save signals in Django. What happens is that my model has a field called status, and when an entry for this model is added/saved, its status must be changed according to some condition. My model looks like this:
    class Ticket(models.Model):
        (...)
        status = models.CharField(max_length=1, choices=OFFERT_STATUS, default='O')
    And my signal handler, configured for pre_save:
    def ticket_handler(sender, **kwargs):
        ticket = kwargs['instance']
        (...)
        if someOtherCondition:
            ticket.status = 'C'
    Now, if I put a ticket.save() just below that last line's if statement, what happens is a huge iteration black hole, since this action triggers the signal itself. And this problem happens with both pre_save and post_save. Well... I guess the ability to alter an entry before (or even after) saving it is pretty common in Django's universe, so what am I doing wrong here? Are signals the wrong approach, or am I missing something else? Also, once this pre_save/post_save function is triggered, would it be possible to access another model's instance and change a specific row entry on that? Thanks

    Read the article

  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome. I’m sure my readers know my opinion about this – I have made SQL Server my life’s work after all! I love technology and all things computer-related. Of course, even with my love for technology, I have to admit that it has its limits. For example, it takes a human brain to notice that data has been input incorrectly. Computer “brains” might be faster than humans, but human brains are still better at pattern recognition. For example, a human brain will notice that “300” is a ridiculous age for a human to be, but to a computer it is just a number. A human will also notice similarities between “P. Dave” and “Pinal Dave,” but this would stump most computers. In a database, these sorts of anomalies are incredibly important. Databases are often used by multiple people who rely on this data to be true and accurate, so data quality is key. That is why the improved SQL Server feature set around Master Data Management includes Data Quality Services. This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data. This allows a human brain, with its pattern recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave. A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but also go back and “fix up” any data that has already been entered. It also allows you to combine data from multiple places, and it will apply these rules across the board, so that you don’t have any weird issues that crop up when trying to fit a round peg into a square hole. There are two parts of Data Quality Services that help you accomplish all these neat things. The first part is DQS Server, which you can think of as the server-side component of the system. It is installed alongside SQL Server (it needs to be installed separately, after SQL Server is installed) and runs quietly in the background, performing all its cleanup services. DQS Client is the user interface that you interact with to set the rules and check over your data. There are three main aspects of the Client: knowledge base management, data quality projects and administration. Knowledge base management is the part of the system that allows you to set the rules, or program the “knowledge base,” so that your database is clean and consistent. Data Quality projects are what run in the background and clean up the data that is already present. The administration area allows you to check what DQS Client is doing, change rules, and generally oversee the entire process. The whole process is user-friendly and a pleasure to use. I highly recommend implementing Data Quality Services in your database. Here are a few of my blog posts related to Data Quality Services, and I encourage you to try this out.
    SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012
    SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS
    SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo
    SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion
    SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • broken upgrade from 10.04 to 12.04 on a VPS - recoverable?

    - by HorusKol
    I have a VPS hosted 1500 km away. It originally came with 9.10 - and this morning I decided that I really should get to an LTS release, and figured I'd jump to 12.04. Researching, I discovered that there is no direct path between 9.10 and 12.04, but that I could upgrade via 10.04. After backing up my data, I dove in. The upgrade to 10.04 was successful, and I proceeded to upgrade to 12.04. Things started to go wrong. First, I got an error with GLIBC - I retried and got the same error. That's when I stopped the upgrade. I then tried another round of apt-get update && apt-get upgrade and got a list of "unmet dependencies":
    apt: Depends: ubuntu-keyring but it is not going to be installed
         Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
         Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
         PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
    apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
    libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
    libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
         Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
    libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
    libept0: Depends: libapt-pkg-libc6.10-6-4.8
    libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed
    I tried to see if I could do something about these using apt-get -f install. This told me that I would need to upgrade my kernel. I found instructions on how to do this, but when I ran apt-get to install the new linux headers, I got the same dependency errors. I found another answer here where someone else had had an interruption in their upgrade - and tried the solution that worked for them: sudo apt-get -f dist-upgrade This resulted in the error:
    E: Could not perform immediate configuration on 'python2.7-minimal'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)
    I tried to resolve this by: apt-get install -o APT::Immediate-Configure=false -f apt python-minimal But this simply ended up with this last list of dependency errors:
    apt: Depends: ubuntu-keyring but it is not going to be installed
         Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
         Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
         PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
    apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
    libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
    libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
         Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
    libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
    libept0: Depends: libapt-pkg-libc6.10-6-4.8
    libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed
    python: Depends: python-minimal (= 2.6.5-0ubuntu1) but 2.7.3-0ubuntu2 is to be installed
    python-apt: Depends: libapt-pkg-libc6.10-6-4.8
    python-minimal: Depends: python2.7-minimal (>= 2.7.3) but it is not going to be installed
         Breaks: python-support (< 1.0.10ubuntu2) but 1.0.4ubuntu1 is to be installed
    synaptic: Depends: libapt-pkg-libc6.10-6-4.8
    Any ideas on how to dig out of this hole?

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, all the growth with virtual machines, hosted machines, hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins finally can set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • Disable Acer eRecovery system

    - by Joel Coehoorn
    The meat of this question is that I'm looking for a way to either require a password before using a recovery partition or "break" the recovery partition (specifically, Acer eRecovery) in a way that I can later "unbreak" only by booting normally into Windows first. Here are the full details: I have a set of new Acer Veriton n260g machines in a computer lab. A lot of work went into setting up this lab to work well - for example, Office 2007 and other programs needed by the students were installed, all Windows updates are applied, and a default desktop is set up. All in all it's several hours of work to fully set up one machine. Unfortunately, I don't currently have the ability to easily image these machines, and even if I did I would want to avoid downtime even while an image is restored. Therefore, I've taken steps to lock them down — namely DeepFreeze and a BIOS password to prevent booting from anywhere but the frozen hard drive. DeepFreeze is an amazing product — as long as you boot from the frozen hard drive, there is no way to actually make permanent changes to that hard drive. Anything you do is wiped after the machine restarts. It lets me give students the leeway to do what they want on lab computers without worrying about them breaking something. The problem is that even with the BIOS locked and set to only boot from the hard drive, these Acers still have a simple way to choose a different boot source: shut them down and put a paper clip in a little hole at the top while you turn them on again. This puts them into the "Acer eRecovery" mode. This by itself is no big deal — you can still power cycle with no impact. But if you then click through the menu to reset the machine (we're now past the point of curiosity and on to intent) it will wipe the hard drive and restore it to the original state. Of course, a few students have already figured this out and reset a couple of machines. That's unfortunate, but inevitable. I don't want to destroy the ability to do this entirely (which I could, by repartitioning the drives to remove the recovery partition), but I would like a way to require a password first, or "break" the recovery system in a way that I can "unbreak" only if I first un-freeze the hard drive in DeepFreeze. Any ideas?

    Read the article

  • Need help making site available externally

    - by White Island
    I'm trying to open a hole in the firewall (ASA 5505, v8.2) to allow external access to a Web application. Via ASDM (6.3?), I've added the server as a Public Server, which creates a static NAT entry [I'm using the public IP that is assigned to 'dynamic NAT--outgoing' for the LAN, after confirming on the Cisco forums that it wouldn't bring everyone's access crashing down] and an incoming rule "any... public_ip... https... allow", but traffic is still not getting through. When I look at the log viewer, it says it's denied by access-group outside_access_in, implicit rule, which is "any any ip deny". I haven't had much experience with Cisco management. I can't see what I'm missing to allow this connection through, and I'm wondering if there's anything else special I have to add. I tried adding a rule (several variations) within that access-group to allow https to the server, but it never made a difference. Maybe I haven't found the right combination? :P I also made sure the Windows firewall is open on port 443, although I'm pretty sure the current problem is Cisco, because of the logs. :) Any ideas? If you need more information, please let me know. Thanks
    Edit: First of all, I had this backward. (Sorry.) Traffic is being blocked by access-group "inside_access_out", which is what confused me in the first place. I guess I confused myself again in the midst of typing the question. Here, I believe, is the pertinent information. Please let me know what you see wrong.
    access-list acl_in extended permit tcp any host PUBLIC_IP eq https
    access-list acl_in extended permit icmp CS_WAN_IPs 255.255.255.240 any
    access-list acl_in remark Allow Vendor connections to LAN
    access-list acl_in extended permit tcp host Vendor any object-group RemoteDesktop
    access-list acl_in remark NetworkScanner scan-to-email incoming (from smtp.mail.microsoftonline.com to PCs)
    access-list acl_in extended permit object-group TCPUDP any object-group Scan-to-email host NetworkScanner object-group Scan-to-email
    access-list acl_out extended permit icmp any any
    access-list acl_out extended permit tcp any any
    access-list acl_out extended permit udp any any
    access-list SSLVPNSplitTunnel standard permit LAN_Subnet 255.255.255.0
    access-list nonat extended permit ip VPN_Subnet 255.255.255.0 LAN_Subnet 255.255.255.0
    access-list nonat extended permit ip LAN_Subnet 255.255.255.0 VPN_Subnet 255.255.255.0
    access-list inside_access_out remark NetworkScanner Scan-to-email outgoing (from scanner to Internet)
    access-list inside_access_out extended permit object-group TCPUDP host NetworkScanner object-group Scan-to-email any object-group Scan-to-email
    access-list inside_access_out extended permit tcp any interface outside eq https
    static (inside,outside) PUBLIC_IP LOCAL_IP[server object] netmask 255.255.255.255
    I wasn't sure if I needed to reverse that "static" entry, since I got my question mixed up... and also with that last access-list entry, I tried interface inside and outside - neither proved successful... and I wasn't sure about whether it should be www, since the site is running on https. I assumed it should only be https.

    Read the article

  • Email server can send internally, but messages never arrive at external recipients

    - by Chase Florell
    I'm running MailEnable on my server, and have been for many years. Recently we had an attack on our server, and I was able to close the hole. Since then, our mail server doesn't seem to be sending mail out. If I send an email from myself to another account hosted on the server, the email arrives as expected. If I send an email from my Gmail account to my business account, the email also arrives as expected. The problem comes when I send from my business account to an external domain. I tried the following: Gmail.com, Hotmail.com, Shaw.ca. When I send to any of the above, the message leaves my client as expected, the logs appear to accept and forward on the message, the SMTP outbound queue is empty, and the message never arrives. I have checked our domain with mxtoolbox.com and senderbase.org, and neither of them is reporting any problems with our domain. I have ensured that port 25 is open (along with the other standard ports). Here is one of the log entries from the SMTP connector:
    11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 220 mx1.example.com ESMTP MailEnable Service, Version: 6.81--6.81 ready at 11/05/13 12:10:00 0 0
    11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18
    11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18
    11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH AUTH LOGIN 334 VXNlcm5hbWU6 18 12
    11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH {blank} 334 UGFzc3dvcmQ6 18 26 [email protected]
    11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH Y29sb25lbGZhY2U= 235 Authenticated 19 18 [email protected]
    11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 MAIL MAIL FROM:<[email protected]> 250 Requested mail action okay, completed 43 31 [email protected]
    11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 RCPT RCPT TO:<[email protected]> 250 Requested mail action okay, completed 43 35 [email protected]
    11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 DATA DATA 354 Start mail input; end with <CRLF>.<CRLF> 46 6 [email protected]
    Here are the headers of the sent message:
    X-Assp-Version: 1.7.5.7(1.0.07) on ASSP.nospam
    X-Assp-ID: ASSP.nospam 78601-04523
    X-Assp-Intended-For: [email protected]
    X-Assp-Envelope-From: [email protected]
    Received: from [10.10.1.101] ([68.147.245.149] helo=[10.10.1.101]) with IPv4:587 by ASSP.nospam; 5 Nov 2013 12:10:00 -0700
    From: Chase Florell <[email protected]>
    Content-Type: text/plain
    Content-Transfer-Encoding: 7bit
    Subject: Test Message
    Message-Id: <[email protected]>
    Date: Tue, 5 Nov 2013 12:10:18 -0700
    To: Chase Florell <[email protected]>
    Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1816\))
    X-Mailer: Apple Mail (2.1816)
    Where else can I check to see if there is something broken? What could cause a problem like this, whereby the message appears to send but never arrives, and never returns a bounce?

    Read the article

  • phpMyAdmin works with HTTPS only (not HTTP)

    - by 01010011
    Hi, I've been having a problem getting phpMyAdmin to work consistently on my XP desktop and laptop computers for months now. When I typed localhost/phpmyadmin into Chrome on both machines, I kept getting Error #1045 Access denied for user 'root'@'localhost' (using password: YES). Eventually, I realized that I had two (2) versions of MySQL installed (XAMPP and MySQL Server 5.1) on both machines. So I uninstalled MySQL Server 5.1 from the desktop and phpMyAdmin worked. But when I uninstalled MySQL Server 5.1 from my laptop, it did not work. I realized, though, that I could still get into the MySQL command-line client using my password and that my databases were still intact. So I uninstalled and reinstalled XAMPP on the laptop, and phpMyAdmin worked after that. Now I have a new problem. phpMyAdmin's home page has a message at the bottom: Your configuration file contains settings (root with no password) that correspond to the default MySQL privileged account. Your MySQL server is running with this default, is open to intrusion, and you really should fix this security hole by setting a password for user 'root'. So I located the following lines in the config.inc.php file:
    /* Authentication type and info */
    $cfg['Servers'][$i]['auth_type'] = 'config';
    $cfg['Servers'][$i]['user'] = 'root';
    $cfg['Servers'][$i]['password'] = '';
    $cfg['Servers'][$i]['AllowNoPassword'] = true;
    and I just changed the last two lines as follows:
    $cfg['Servers'][$i]['password'] = 'mypassword';
    $cfg['Servers'][$i]['AllowNoPassword'] = false;
    As soon as I did that and tried to access phpMyAdmin again, I got the Error #1045 message again, but when I tried https://localhost/phpmyadmin/ I got a red page saying this site's certificate is not trusted, would you like to proceed anyway. And now it only works using https. I would really like to settle all my phpMyAdmin problems once and for all, so here are my questions: 1. Why does my laptop only access phpMyAdmin via https? 2. How do I change my password in my configuration file? Also, if you have any other tips regarding phpMyAdmin, they are very welcome. Thanks in advance
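    One thing worth noting about that warning: it is about the MySQL root account having no password, not about the password line in config.inc.php. Changing only the config file makes phpMyAdmin try to log in with a password MySQL does not know, which is exactly the Error #1045 seen above. A hedged sketch of the usual fix - give root a real password in MySQL first, then either mirror it in the config or (simpler) switch to cookie authentication so no password is stored in the file at all:

        <?php
        /* config.inc.php (sketch). With 'cookie' auth, phpMyAdmin prompts for
           the MySQL user and password at login, so nothing sensitive needs to
           live in this file. First set a real password from the MySQL
           command-line client, e.g.:
               SET PASSWORD FOR 'root'@'localhost' = PASSWORD('mypassword');
        */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;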

    Read the article

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a specific little problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle - a typical smaller-shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the internet by another router OUTSIDE of our control. Call it a project network that is part of another company's network, joined up via a VPN they set up. This means that they control the router used for this network, and they can reconfigure things so that they can access the machines in this network. The network is physically split on our end across three locations through some VLAN-capable switches. At one end there is the router the other company controls. I need/want to give the machines used in this network access to my company network - in fact, it may be good to make them part of my Active Directory domain; the people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of integration using the externally controlled router is ruled out by this requirement. So, my idea is this: we accept that the IPv4 address space and network topology in this network are not under our control, and we seek alternatives for integrating those machines into our company network. The two concepts I came up with are:
    1. Use some sort of VPN - have the machines log into a VPN. Thanks to them running modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop connects from.
    2. Alternatively, establish IPv6 routing to this ethernet segment. But - and this is the trick - block all IPv6 packets in the switch before they hit the third-party-controlled router, so that even IF they turn on IPv6 on that thing (it is not used now, but they could do it) they would not get a single packet. The switch can do that nicely by pulling all IPv6 traffic arriving on that port into a separate VLAN (based on the ethernet protocol type).
    Does anyone see a problem with using the switch to isolate the outside router from the IPv6 traffic? Any security hole? It is sad that we have to treat this network as hostile - things would be a lot easier otherwise - but the support personnel there are of "known dubious quality", and the legal side is clear: we cannot fulfill our obligations if we integrate them into our company while they are under a jurisdiction we don't have a say in.

    Read the article

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack:
    - Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here.
    - Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here. You can configure the specific permissions available to assemblies using ASP.NET policy files.
    Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I’ll be concentrating on in this post.
    Creating a partially-trusted appdomain
    There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain – this one. This is the only call that allows you to specify a PermissionSet for the domain. All the other calls simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain:
    - The specific permissions granted to all assemblies in the domain.
    - The application base (aka working directory) of the domain.
    - The list of assemblies that have full-trust if they are loaded into the sandboxed domain.
    The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I’ll be looking at the details of this in a later post.
    Granting permissions to the appdomain
    Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add permissions you want assemblies loaded into that appdomain to have by default:
    PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
    // all assemblies need Execution permission to run at all
    restrictedPerms.AddPermission(
        new SecurityPermission(SecurityPermissionFlag.Execution));
    // grant general read access to C:\config.xml
    restrictedPerms.AddPermission(
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml"));
    // grant permission to perform DNS lookups
    restrictedPerms.AddPermission(
        new DnsPermission(PermissionState.Unrestricted));
    It’s important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you’re unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code.
    The application base of the domain
    This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:
    AppDomainSetup appDomainSetup = new AppDomainSetup {
        ApplicationBase = @"C:\temp\sandbox",
    };
    If you’ve read the documentation around sandboxed appdomains, you’ll notice that it mentions a security hole if this parameter is not set correctly. I’ll be looking at this, and other pitfalls that will break the sandbox when using sandboxed appdomains, in a later post.
    Full-trust assemblies in the appdomain
    Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I’ll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly to be executed as full-trust in the sandbox:
    // get the Assembly object for the assembly
    Assembly assemblyWithApi = ...
    // get the StrongName from the assembly's collection of evidence
    StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();
    Creating the sandboxed appdomain
    So, putting these three together, you create the appdomain like so:
    AppDomain sandbox = AppDomain.CreateDomain(
        "Sandbox",
        null,
        appDomainSetup,
        restrictedPerms,
        apiStrongName);
    You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:
    IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
        @"C:\temp\sandbox\SandboxedAssembly.dll",
        "Sandboxed.Entrypoint");
    // call the Execute method on this object within the sandbox
    o.Execute();
    The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (e.g. isolated storage), you can pass in null for this parameter.
    Conclusion
    That’s the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strong names of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I’ll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19  | Next Page >