Search Results

Search found 1486 results on 60 pages for 'hexagon theory'.


  • Localization of icon and default screen in iPhone

    - by hgpc
    Can the app icon and default screen be localized on iPhone? Has anyone tried it? In theory it should be possible, as they're just image resources, but I found no explicit mention of this in the documentation, and I wouldn't like to have my app rejected or failing because of it.

    Read the article

  • .NET Compact Framework app that will run on both Professional and Standard

    - by CJCraft.com
    Is there any guidance on creating apps that will run on both professional (touch-screen) and standard (non-touch-screen) devices? I have a simple application that is mostly text and buttons and that in theory should run on both professional and standard devices with little if any modification. The IDE seems determined to make this hard to impossible, but I expect it to be possible. Any advice?

    Read the article

  • Brighton Rocks: UA Europe 2011

    - by ultan o'broin
    User Assistance Europe 2011 was held in Brighton, UK. Having seen Quadrophenia a dozen times, I just had to go along (OK, I wanted to talk about messages in enterprise applications). Sadly, it rained a lot, though that was still eminently more tolerable than being stuck home in Dublin during Bloomsday. So, here are my somewhat selective highlights and observations from the conference, massively skewed towards my own interests, as usual.

    Enjoyed Leah Guren's (Cow TC) great start ‘keynote’ on the Cultural Dimensions of Software Help Usage. Starting out by revisiting Hofstede's and Hall's work on culture (how many times have I done this for Multilingual magazine?) and then Nielsen's findings on age as an indicator of performance, Leah showed how it is the expertise of the user that user assistance (UA) needs to be designed for (especially for high-end users), with some considerations made for age, while the gender and culture of users are not major factors. Help also needs to be contextual and concise, embedded close to the action. That users are saying things like “If I want help on Office, I go to Google” isn't all that profound at this stage, but it is always worth reiterating how search can be optimized to return better results for users. Interestingly, regardless of user education level, the issue of information quality--hinging on the lynchpin of terminology reflecting that of the user--is critical. Major takeaway for me there.

    Matthew Ellison’s sessions on embedded help and demos were also impressive. Embedded help that is concise and contextual is definitely a powerful UX enabler, and I’m pleased to say that in Oracle Fusion Applications we have embraced the concept fully. Matthew also mentioned in his session on successful software demos that the principle of modality with demos is a must. Look no further than the Oracle User Productivity Kit demo modes See It!, Try It!, Know It!, and Do It!, for example.

    I also found some key takeaways in the presentation by Marie-Louise Flacke on notes and warnings. Here, legal considerations seemed to take precedence over providing any real information to users. I was delighted when Marie-Louise called out the Oracle JDeveloper documentation as an exemplar of how to use notes and instructions instead of trying to scare the bejaysus out of people without providing them with any real information they’d find useful.

    My own session on designing messages for enterprise applications was well attended. Know your user profiles (remember, user expertise is the king maker for UA, so write for each audience involved), how users really work, the required application business and UI rules, what your application technology supports, and how messages integrate with the enterprise help desk and support policies, and you will go much further than relying solely on the guideline of "writing messages in plain language". And remember the value in warnings and confirmation messages too, and how you can use them smartly. I hope y’all got something from my presentation and from my answers to questions afterwards.

    Ellis Pratt stole the show with his presentation on applying game theory to software UA, using plenty of colorful, relevant examples (check out the Atlassian and Dropbox approaches, for example), and striking just the right balance between theory and practice. Completely agree that the approach to take here is not to make UA itself a game, but to invoke UA as part of a bigger game dynamic (time-to-task completion, personal and communal goals, personal achievement and status, and so on). Sure, there are gotchas and limitations to gamification, and we need to do more research. However, we'll hear a lot more about this subject in coming years, particularly in the enterprise space. I hope.

    I also heard good things about the different sessions on DITA usage (including one by Sonja Fuga that clearly opens the door for major innovation in the community content space using WordPress), the progressive disclosure of information (Cerys Willoughby), an overview of controlled language (or "information quality", as I like to position it) solutions and rationale by Dave Gash, and others. I also spent time chatting with Mike Hamilton of MadCap Software, who showed me a cool demo of their Flare product and the Lingo translation solution. I liked the idea of their licensing model for workers-on-the-go; that’s smart UX-awareness in itself. Also chatted with Julian Murfitt of Mekon about uptake of DITA in the enterprise space.

    In all, it's worth attending UA Europe. I was surprised, however, not to see conference topics about mobile UA, community conversation and content, or search in its own right. These are unstoppable forces now, and the latter is pretty central to providing assistance to all but the most irredentist of hard-copy fetishists or advanced technical or functional users working away on the back end of applications and systems. Only saw one iPad too (says the guy who carries three laptops). Tweeting was pretty much nonexistent during the event, so no community energy there. Perhaps all this can be addressed next year. I would love to see the next UA Europe event come to Dublin (despite Bloomsday, it's not a bad place, really) now that hotels are so cheap and all.

    So, what is my overall impression of the state of user assistance in Europe? Clearly, there are still many people in the industry who feel there is something broken with the traditional forms of user assistance (particularly printed doc) and that something needs to be done about it. I would suggest they move on and try to embrace change instead. Many others see new possibilities offered by UX and technology, as well as the reality of online user behavior in an increasingly connected world, and that is encouraging. Such thought leaders need to be listened to. As Ellis Pratt says in his great book, Trends in Technical Communication - Rethinking Help: “To stay relevant means taking a new perspective on the role (of technical writer), and delivering “products” over and above the traditional manual and online Help file... there are a number of new trends in this field - some complementary, some conflicting. Whatever trends emerge as the norm, it’s likely the status quo will change.” It already has, IMO.

    I hear similar debates in the professional translation world about the onset of translation crowdsourcing (the Facebook model) and machine translation (trust me, that battle is over). Neither of these initiatives has put anyone out of a job and probably won't, though the nature of the work might change. If anything, such innovations have increased the overall need for professional translators as user expectations rise, new audiences emerge, and organizations need to collate and curate user-generated content, combining it with their own. Perhaps user assistance professionals can learn from other professions and grow accordingly.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay does the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers.

    Part 1: Exposing the on-premise service

    In theory, this is ultra-straightforward. In practice, on a dev laptop it is - but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal, which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

    Configuring Azure Service Bus

    Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you’re in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.

    Authenticating and authorizing to ACS

    When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post, On Not Using owner with the Azure AppFabric Service Bus, with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description. Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly.

    The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, the output will have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
    Edit the rule group and click Add to add this new rule. The values to use are:

    Issuer: Access Control Service
    Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    Input claim value: serviceProvider
    Output claim type: net.windows.servicebus.action
    Output claim value: Listen

    When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

    <azure namespace="sixeyed-ipasbr">
      <!-- ACS credentials for the listening service (Part1):-->
      <service identityName="serviceProvider"
               symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
    </azure>

    Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

    <behavior name="SharedSecret">
      <transportClientEndpointBehavior credentialType="SharedSecret">
        <clientCredentials>
          <sharedSecret issuerName="serviceProvider"
                        issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
        </clientCredentials>
      </transportClientEndpointBehavior>
    </behavior>

    - and your service namespace in the Azure endpoint:

    <!-- Azure Service Bus endpoints -->
    <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
              binding="netTcpRelayBinding"
              contract="Sixeyed.Ipasbr.Services.IFormatService"
              behaviorConfiguration="SharedSecret">
    </endpoint>

    The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.

    Testing the service locally

    As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

    <!-- local REST endpoint for internal use -->
    <endpoint address="rest"
              binding="webHttpBinding"
              behaviorConfiguration="RESTBehavior"
              contract="Sixeyed.Ipasbr.Services.IFormatService" />

    Build the service, then navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response. If your network allows it, you'll get the expected response, and in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.

    Setting up network access to Azure

    But if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, the relay will run through them. If not, the binding steps down to standard HTTP and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
    If your network security guys are doing their job, the first option will be blocked by the firewall and the second option will be blocked by the proxy, so you'll get this error:

    System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

    - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

    System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

    - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can’t make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in which version (here are some hints).

    System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

    - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding did its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft’s ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

    www.public-trust.com
    mscrl.microsoft.com

    We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

    netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

    - and you might need to run this to clear out your credential cache:

    certutil -urlcache * delete

    If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
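    Not part of the original post, but a quick way to sanity-check those network paths before involving the firewall team is a minimal Python probe (assuming Python is available on the service box; the hostname is the sample namespace from this post):

    import socket

    # The relay wants outbound TCP on 9350-9354; if those are blocked it
    # falls back to a CONNECT tunnel on 443/80. Note that a raw connect
    # succeeding on 443/80 does not prove the proxy will allow CONNECT,
    # but failures on all ports mean the relay has no way out.
    host = "sixeyed-ipasbr.servicebus.windows.net"
    for port in (9350, 9351, 9352, 9353, 9354, 443, 80):
        try:
            socket.create_connection((host, port), timeout=5).close()
            print(port, "reachable")
        except OSError as err:
            print(port, "blocked:", err)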

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, overall.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference.

    The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked.

    Brownfield code is the gray (brown?) area between the two, and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self.

    The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem", which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base.

    The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf.

    The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.
    While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged.

    I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be.

    The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, overall) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind.

    The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP.

    This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure.

    Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

  • Generating all unique combinations for "drive ya nuts" puzzle

    - by Yuval A
    A while back I wrote a simple Python program to brute-force the single solution for the Drive Ya Nuts puzzle. The puzzle consists of 7 hexagons with the numbers 1-6 on them, and all pieces must be aligned so that each number is adjacent to the same number on the next piece.

    The puzzle has ~1.4G non-unique possibilities: you have 7! options to sort the pieces by order (for example, center=0, top=1, continuing in clockwise order...). After you have ordered the pieces, you can rotate each piece in 6 ways (each piece is a hexagon), so you get 6**7 possible rotations for a given permutation of the 7 pieces. Totalling: 7!*(6**7) = ~1.4G possibilities. The following Python code generates these possible solutions:

    from itertools import product

    def rotations(p):
        for i in range(len(p)):
            yield p[i:] + p[:i]

    def permutations(l):
        if len(l) <= 1:
            yield l
        else:
            for perm in permutations(l[1:]):
                for i in range(len(perm) + 1):
                    yield perm[:i] + l[0:1] + perm[i:]

    def constructs(l):
        for p in permutations(l):
            for c in product(*(rotations(x) for x in p)):
                yield c

    However, note that the puzzle has only ~0.2G unique possible solutions: you must divide the total number of possibilities by 6, since each possible solution is equivalent to 5 other solutions (simply rotate the entire puzzle by 1/6 of a turn). Is there a better way to generate only the unique possibilities for this puzzle?
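    One way to generate only the unique possibilities (not part of the original question, but following directly from the observation above): rotating a finished layout by 1/6 of a turn steps the centre piece through its 6 orientations, so each equivalence class of 6 solutions contains exactly one member in which the centre piece keeps its given orientation. Fixing the centre's rotation and iterating rotations only for the 6 outer pieces yields 7!*(6**6) = ~0.2G candidates. A minimal sketch, using itertools for brevity and assuming the first element of each permutation is the centre piece:

    from itertools import permutations, product

    def rotations(piece):
        # All 6 rotations of one hexagonal piece (a tuple of 6 numbers).
        for i in range(len(piece)):
            yield piece[i:] + piece[:i]

    def unique_constructs(pieces):
        # Keep the centre piece (p[0]) in its given orientation and only
        # rotate the 6 outer pieces: exactly one representative from each
        # class of 6 whole-puzzle rotations, 7! * 6**6 ~= 0.2G candidates
        # instead of 7! * 6**7 ~= 1.4G.
        for p in permutations(pieces):
            for outer in product(*(rotations(x) for x in p[1:])):
                yield (p[0],) + outer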

    Read the article

  • Separating two graphs based on connectivity and coordinates

    - by martin
    I would like to separate existing data of vertices and edges into two or more graphs that are not connected to each other. As an example, imagine two hexagons on top of each other, lying at different Z. Hexagon 1 has the following vertices: A(0,0,1), B(1,0,2), C(2,1,2), D(1,2,1), E(0,2,1), F(-1,2,1). The connectivity is as follows: A-B, B-C, C-D, D-E, E-F, F-A. This is part of graph 1, as all these vertices are connected in this layer. Hexagon 2 has the following vertices: A1(0,0,6), B1(1,0,7), C1(2,1,7), D1(1,2,8), E1(0,2,7), F1(-1,2,6). The connectivity is as follows: A1-B1, B1-C1, C1-D1, D1-E1, E1-F1, F1-A1. This is part of graph 2. My data comes as a list of vertices and a list of edges that I can form graphs with. I would like to eliminate graph 2 and pass only the vertices and connectivity of graph 1 to the polygon determination part of my algorithm. My real data contains around 1000 connected polygons as graph 1 and around 100 (much larger in area) polygons as graph 2. I would like to eliminate graph 2. I am programming this in Python. Thanks in advance.
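    Not from the original question, but worth noting: the Z coordinates are irrelevant here, because the edge list alone determines connectivity, so a plain connected-components pass does the job. A minimal sketch in Python (the names vertices and edges are assumed to match the poster's data; networkx's connected_components would work equally well):

    from collections import defaultdict, deque

    def connected_components(vertices, edges):
        # Build an undirected adjacency list from the edge list.
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, components = set(), []
        for start in vertices:
            if start in seen:
                continue
            # Breadth-first search gathers one whole component.
            comp, queue = [], deque([start])
            seen.add(start)
            while queue:
                node = queue.popleft()
                comp.append(node)
                for nbr in adj[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        queue.append(nbr)
            components.append(comp)
        return components

    Keeping graph 1 is then a matter of picking the component(s) of interest, e.g. max(components, key=len) for the component with the most vertices, and filtering the edge list against that vertex set.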

    Read the article

  • How do I install an HP home-use printer on Windows Home Server (Windows Server 2003)

    - by Rob Allen
    I have an HP DeskJet F4210 printer that I would like to share on my network via Windows Home Server. Unfortunately, the driver installation checks for supported OSes, detects Home Server as Windows Server 2003, and exits. The driver install supports WinXP, W2k, Vista, and Win98SE. In theory, drivers for XP or Windows 2000 should work fine with Home Server. When using the "Install Printer" tool in Home Server I am only able to select .inf files (there are several on the install media), but the driver folders for XP and 2000 have .sys and .dll files. How can I bypass HP's short-sighted install program and get this printer up and running on Home Server? I'll be happy with basic print functionality and will save the task of enabling scanning for another time.

    Read the article

  • Always a path to the internet even if Windows SBS is off

    - by Mark
    Hello all. Is it possible to have a configuration in a Windows 2003 SBS environment where, in the event that the SBS box crashes, is turned off, or is being worked on, there still exists a path to the internet for domain users and visitors? I would like to have the standalone router issue DHCP IPs. The primary DNS would point to the SBS, the secondary would point to the ISP's DNS server. My theory was that if someone was using the internet and the SBS box went down, they wouldn't be able to access the network shares but would still be able to use the internet. (We are moving everything into the cloud with Google Apps Non-Profit.) Does this seem like a reasonable configuration? Or are there pitfalls that I will fall into? Thanks, Mark

    Read the article

  • USB hard drive doesn't gracefully power off after eject on Windows 7

    - by Sim
    I have a couple of Seagate FreeAgent Go external USB hard drives and would like them to gracefully power off after ejecting in Windows 7. With Windows XP, they gracefully power off a few seconds after being ejected. When ejecting them on Windows 7 they just stay on and have to be physically disconnected before they lose power. I have checked the hard drive removal policy and it is set to quick removal. I have also looked in the Seagate forums but couldn't find any info on this, so I thought I'd ask the SuperUser community: any ideas why the difference, and how can I get the same behaviour in Windows 7 as in XP? Update: I am finding that this also happens with USB thumb drives. My current theory is that there were changes to the driver model in Vista/Win 7 that haven't been reflected in the device drivers yet, so things that worked under XP don't under Win 7 because the drivers haven't been updated for the new model. Does that sound right?

    Read the article

  • What kind of storage do people actually use for VMware ESX servers?

    - by Dirk Paessler
    VMware and many network evangelists try to tell you that sophisticated (= expensive) fiber SANs are the "only" storage option for VMware ESX and ESXi servers. Well, yes, of course. Using a SAN is fast, reliable and makes vMotion possible. Great. But: can all ESX/ESXi users really afford SANs? My theory is that less than 20% of all VMware ESX installations on this planet actually use fiber or iSCSI SANs. Most of these installations will be in larger companies who can afford them. I would predict that most VMware installations use "attached storage" (vmdks are stored on disks inside the server). Most of them run in SMEs, and there are so many of them! We run two ESX 3.5 servers with attached storage and two ESX 4 servers with an iSCSI SAN, and the "real-life difference" between the two is barely noticeable :-) Do you know of any official statistics for this question? What do you use as your storage medium?

    Read the article

  • Is email encryption practical enough?

    - by Dimitri C.
    All emails I have ever sent were sent as plain text. Like postcards, everybody along the way to the addressee could easily read and store them. This worries me. I know privacy is something of the past, but encrypting email is possible, at least in theory. However, I wonder whether it is practical enough. Is there anybody who has experience with email security? Is it easy to set up? And can you still send and receive email from all your friends and acquaintances?
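    For what it's worth (not from the original question), the mechanics are the easy part. A minimal sketch using the python-gnupg wrapper and smtplib, assuming GnuPG is installed, the recipient's public key is already in the local keyring, and the addresses and SMTP host are placeholders:

    import smtplib
    from email.mime.text import MIMEText

    import gnupg  # python-gnupg, a wrapper around a local GnuPG install

    gpg = gnupg.GPG()
    # Encrypt to the recipient's public key (must already be imported).
    encrypted = gpg.encrypt("Meet at noon.", "friend@example.com")
    assert encrypted.ok, encrypted.status

    msg = MIMEText(str(encrypted))  # ASCII-armoured PGP block as the body
    msg["Subject"] = "hello"        # note: headers always stay in the clear
    msg["From"] = "me@example.com"
    msg["To"] = "friend@example.com"

    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

    The practicality problem is exactly the keyring assumption: every correspondent has to generate a key pair and exchange public keys with you first, which is largely why plain-text email persists.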

    Read the article

  • Windows Server 2008, IIS7 and Windows Authentication

    - by Chalkey
    We currently have a development server set up on which we are trying to test some Windows authentication ASP.NET code. We have turned on Windows Authentication in IIS7 on Windows Server 2008 R2 fine, and it asks the user for a username and password as expected, but the problem is it doesn't appear to accept any credentials. This code, for example...

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        Page.Title = "Home page for " + User.Identity.Name
    End Sub

    ...always returns an empty string. One theory we have is that it's because we don't have Active Directory installed yet; we are just testing by logging on via the machine name, not a domain. Is this type of authentication only applicable to domains (if so, we can probably install Active Directory and some test accounts), or is it possible to get the user identity when logging in using the machine name? Ideally we would like to be able to test this on our local machines (Windows 7 Pro) using our own accounts (again, these aren't on a domain) and IIS, but this has the same issue as our dev server. Thanks,

    Read the article

  • Secure PHP environments with PHP-FPM and SFTP

    - by pdd
    I'd like to set up secure environments for a small number of untrusted PHP websites on a Debian server. Right now everything runs on the same Apache2 with mod_php5 and vsftpd for administrative file access, so there is room for improvement. The idea is to use nginx instead of apache, SFTP through OpenSSH instead of vsftpd and chrooted (in sshd_config), individual users for each website with their own pool of PHP processes. All these users and nginx are part of the same group. Now in theory I can set 700 permissions on all PHP scripts and 750 on static files that nginx has to serve up. Theoretically, if a website is compromised all the other users' data is safe, right? Are there better solutions that require less setup time and memory per website? Cheers

    Read the article

  • Windows shutdown process termination sequence

    - by jpmartins
    Today I saw a weird situation. I have a theory, but it would help to know more about the Windows shutdown process. If you have some knowledge about it, please share. A machine was shut down (at the moment I suspect unexpected maintenance), and on that machine there was a long-running process that was interrupted. Monitoring confirms that the process did not terminate normally. Looking at the logs for the long-running process, it seems it was just finishing. That seems highly improbable, since it had been running for more than 6 hours (which is a bit more than the usual 5 hours). The process launches child processes and waits for results from them; I suspect poor error handling in the parent process, and that the shutdown terminated the child processes first.

    Read the article

  • How to install two packages that write the same file

    - by Joel E Salas
    Sirs, a conundrum. I have two packages that each create /usr/bin/ffprobe. One of them is ffmpeg from the Deb Multimedia repository, while the other is ffmbc 0.7-rc5 built from source. The hand-rolled one is business-critical, and we used to just install it from source wherever it was necessary. I can only assume it would clobber the ffmpeg file, and there were never any ill effects. In theory, it should be acceptable for our ffmbc package to overwrite the file from the ffmpeg package. Is there any easy way to reconcile this?

    Read the article

  • pfSense and IPv6, direct connect rules

    - by Bgnt44
    My question is about pfSense configuration for IPv6. In theory, IPv6 addresses are fully routable, even in a LAN. As a starting point I have been using this tutorial: http://doc.pfsense.org/index.php/Using_IPv6_on_2.1_with_a_Tunnel_Broker My LAN has both an IPv4 and an IPv6 connection. I would like to be able to access my LAN machines by their IPv6 addresses, but I'm confused about the firewall rules I need to set to be able to do that. Even if I set all interfaces to pass all packets, I'm not able to reach any machine directly by its IPv6 address. Did I miss something? Edit: OK, I found that it works now. I think it has always worked, but my ISP seems to support IPv6 only intermittently... weird.

    Read the article

  • SSL certificate for ISAPI redirected Web Site

    - by Daniel
    I have a Win 2003 server, and I'm using Ionic's ISAPI Rewrite Filter to redirect requests made to a web site configured in IIS to an Apache2 server on a machine not exposed to the Internet. The web site has its host headers configured to catch requests for the specific site, and the redirection is done with the ProxyPass directive. This is working OK. So far the scenario; my question is: I'd like to add a server certificate to the Apache server, but I don't know if I need to add the certificate to both the Apache and IIS sites. I think I still don't get the theory behind this and would like to hear from someone with expertise in the field about the right way to implement it. Thank you in advance.

    Read the article

  • limit linux background flush (dirty pages)

    - by korkman
    Background flushing in Linux happens when either too much written data is pending (adjustable via /proc/sys/vm/dirty_background_ratio) or a timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs). Unless another limit is being hit (/proc/sys/vm/dirty_ratio), more written data may be cached; beyond that, further writes will block. In theory, this should create a background process writing out dirty pages without disturbing other processes. In practice, it disturbs any process doing uncached reading or synchronous writing. Badly. This is because the background flush actually writes at 100% device speed, and any other device requests at that time will be delayed (because all queues and write-caches along the way are filled). Is there any way to limit the number of requests per second the flushing process performs, or to otherwise effectively prioritize other device I/O?
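    Not in the original question, but for reference, the tunables involved are plain files under /proc/sys/vm, so inspecting (and, as root, shrinking) them is trivial. A small Python sketch; note that lowering the ratios does not rate-limit the flusher, it only shrinks the backlog each flush burst has to write:

    # Print the writeback tunables mentioned above. The *_ratio values
    # are percentages of RAM; the *_centisecs values are hundredths of
    # a second. Writing smaller ratios (as root) triggers flushing
    # earlier, trading some throughput for shorter stalls.
    for name in ("dirty_background_ratio", "dirty_ratio",
                 "dirty_expire_centisecs", "dirty_writeback_centisecs"):
        with open("/proc/sys/vm/" + name) as f:
            print(name, "=", f.read().strip())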

    Read the article

  • How would I / could I obtain a reasonably comprehensive list of domain names?

    - by Simon
    I know that domain names are constantly changing, and I know there are a lot of them, but there is clearly a region of the domain name space which is stable. How would I go about getting a list, even a very big one? Such a thing must logically exist, even if it is in a distributed form, because the web's DNS servers resolve names to IP addresses. So in theory if I could poll all the DNS servers in the world at a moment in time I would have the complete list of mapped names. Is there a practical way of doing that? As an aside, does anyone have any good estimates of how many domain names exist at the moment?

    Read the article

  • Is mismatched firmware on drives in a raid-6 a bad thing?

    - by bwerks
    Hi all, I recently expanded a raid-1 to a raid-6 with six drives. I ordered all four of the new drives from the same place, and all of them were advertised to be the same drives as the original two--Seagate 15krpm 146gb. However, when I was looking at the drives in the perc6/i utility, one of them appeared to be an earlier firmware version; it had S515, compared to the other five drives with S527. Sure enough, after inspecting the drive itself, the label advertised the earlier firmware version. Running Dell's SAS firmware upgrade utility should have in theory moved them all up to S52A, but when I ran it it moved the S527 drives up to S52A, and left the S515 drive untouched. Is this something to be worried about? If it's something that should be corrected, is there a way to target a particular drive for upgrade since the firmware utility didn't seem to do it by itself?

    Read the article

  • Is there a way to bridge two outgoing TCP connections in order to bypass firewalls and NAT?

    - by TK Kocheran
    We're all familiar with the problem of port forwarding and NAT: if you want to expose something that accepts an incoming connection, you need to configure port forwarding on the router or conjure up some other black magickery to "punch holes" in the firewall using UDP or something. I'm fairly new to the whole "hole-punching" concept, so could someone explain how it works? Essentially, I'd like to understand how hole-punching works and the theory behind it, as well as whether two TCP connections could be bridged via a third party. Since outgoing TCP connections aren't a problem (NAT handles them), could a third party bridge the connections so that the two parties are still connected, but without the bandwidth cost of the traffic going through the third party?
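    To make the "bridge via a third party" half concrete (this sketch is not from the original question): the naive version is a relay on a public host that both NATed peers dial out to, with the relay splicing the two streams together. It works through any NAT precisely because both connections are outgoing, but it is exactly the setup whose bandwidth cost the question wants to avoid; true hole-punching uses the rendezvous host only to exchange addresses and coordinate a simultaneous TCP open so the peers end up talking directly. A minimal Python relay, with port 9000 as an arbitrary choice:

    import socket
    import threading

    def pipe(src, dst):
        # Copy bytes one way until the source closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)

    # Rendezvous relay on a public host: both peers make OUTBOUND
    # connections to port 9000, so neither needs port forwarding;
    # the relay pays for all the traffic flowing between them.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 9000))
    srv.listen(2)
    peer_a, _ = srv.accept()
    peer_b, _ = srv.accept()
    threading.Thread(target=pipe, args=(peer_a, peer_b), daemon=True).start()
    pipe(peer_b, peer_a)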

    Read the article

  • Does Lapping a CPU / Heatsink actually drop the temp?

    - by Pure.Krome
    Hi folks, I've been watching some YouTube videos about lapping a CPU. I'd never heard of this modding technique before and, though extreme, I was wondering if it actually works. Assuming you lap your CPU and/or heatsink correctly, will the temps drop? When I say drop, at least a 1-degree drop counts as success (for the purposes of this debate). To keep this topic clean, please refrain from commenting on the overkill of the labour involved for (at worst) a 1-degree drop, etc. This is a discussion about the theory and concept, not personal opinion on whether to lap or not.

    Read the article

  • Apache vs Lighttpd: Weird behavior in reverse proxy mode.

    - by northox
    Context: I have an Apache server running in reverse proxy mode in front of a Tomcat Java server. It handles HTTP and HTTPS and forwards those requests to the Tomcat server on an internal HTTP port. Goal: I'm trying to replace the Apache reverse proxy with Lighttpd. Problem: when asking for the same HTTPS URL, with Apache as the reverse proxy the Tomcat server redirects (302) to an HTTPS page, but with Lighttpd it redirects to the same page over HTTP (not HTTPS). Question: what could Lighttpd be doing differently to produce a different result from the backend server? In theory, using Apache or Lighttpd as a reverse proxy should not change anything... but it does. Any idea? I'll try to find something by sniffing the traffic on the backend Tomcat server.

    Read the article
