Search Results

Search found 29502 results on 1181 pages for 'line segment'.


  • Browser-based GUI for a python application

    - by ack__
    I want to create a web/browser-based GUI for a command-line Python application. The goal is to make use of HTML/JS technologies to create this GUI. Like the application itself, it needs to run on Linux and Windows, and the interface will be accessible only from localhost (not exposed to the internet). The GUI will contain 5 to 10 pages. I don't want a traditional desktop GUI that embeds HTML/JS, but just a bunch of HTML files and some kind of controller between those and the application. I also want to make use of asynchronous programming (ajax-like) so I can load and print data in the GUI without refreshing the whole page. I'd probably use jQuery for that and a couple of other things. How would you recommend designing this? Performance is not the key here; I'm rather looking at reliability, portability and simplicity. I'm thinking of using a lightweight Python HTTP server / framework (like CherryPy) and maybe later a Python templating system (at the beginning it will just be a couple of pages). EDIT: I'm looking for ideas/recommendations on how to build this, not for alternatives to a browser/web-based GUI.
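
    For concreteness, here is a minimal sketch of the CherryPy approach mentioned above; the file layout and the /status endpoint are hypothetical placeholders, not a recommendation of a particular structure:

        import os
        import cherrypy
        from cherrypy.lib.static import serve_file

        STATIC_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'static')

        class Gui(object):
            @cherrypy.expose
            def index(self):
                # Serve the main page; jQuery on that page calls /status via ajax.
                return serve_file(os.path.join(STATIC_DIR, 'index.html'))

            @cherrypy.expose
            @cherrypy.tools.json_out()
            def status(self):
                # e.g. $.getJSON('/status', ...) updates the page without a full refresh
                return {'running': True, 'progress': 42}

        if __name__ == '__main__':
            cherrypy.config.update({'server.socket_host': '127.0.0.1',  # localhost only
                                    'server.socket_port': 8080})
            cherrypy.quickstart(Gui(), '/', {'/': {'tools.staticdir.on': True,
                                                   'tools.staticdir.dir': STATIC_DIR}})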

    Read the article

  • Help, broken Gsettings

    - by Rene
    I was trying to disable the global menu as per http://ubuntuhandbook.org/index.php/2013/07/disable-global-menu-on-ubuntu-13-10-saucy/#comment-8612, but while it didn't change anything, after running the autoremove command unity-tweak-tool broke. Obviously my first reaction was to re-install the removed package, but it remains broken. TBH I don't know if it is even related or just a coincidence. When I start it from the launcher it just blinks and disappears. When I start it from a terminal I get this error:

        $ gnome-tweak-tool
        WARNING : Shell not installed or running
        WARNING : Error detecting shell
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 199, in __init__
            raise Exception("Shell not running or DBus service not available")
        Exception: Shell not running or DBus service not available
        INFO : GSettings missing key org.gnome.nautilus.desktop (key computer-icon-visible)
        WARNING : Shell not running None
        INFO : GSettings missing key org.gnome.mutter (key workspaces-only-on-primary)
        Segmentation fault (core dumped)

    I had a look with dconf-editor to see if I could just add the missing keys, but apparently keys aren't meant to be added "by hand". So how can I fix this? I'd rather not have to reinstall everything. Which package is broken, and can I just reinstall that? EDIT: I found that when run as root, gnome-tweak-tool no longer crashes, so it is possibly a permission issue somewhere; I don't know that I changed any permissions. Another related problem, actually the reason I noticed the problem at all, is that unity-tweak-tool no longer seems to save the values I edit. I normally just have the Unity launcher on the primary display but wanted to check what it was like having it on both. I didn't like it, so I went into unity-tweak-tool to set it back, but regardless of how many times I tick "only primary display" it never changes anything. What does unity-tweak-tool actually change, and can I do this directly somehow?

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first.

    Right-Time Marketing: Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help you understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer-service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top line higher and, by virtue of the cloud approach, keep costs at bay.
    My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast”; you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service from the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
    He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time a personalized offer of 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and increase the lifetime value of that customer. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion: As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure GlassFish 3.1.2.2 for ADF 11g, and the need arose to create a JDBC connection pool to my Oracle XE 11g database. While this is really quite trivial, there were no samples of how to do it, and documentation, while good, rarely provides concrete examples. After fumbling around for a few minutes searching for an example, I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the GlassFish command-line tool asadmin or through the admin console; I'm doing this through the admin console.

        1. Start GlassFish and connect to the admin console with the credentials you defined at installation: http://localhost:4848
        2. Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type & Datasource Classname under the General Settings tab. You can go with the defaults for Pool Settings etc.
        3. Go to the Additional Properties tab and create username, password, and url properties with the respective values.
        4. Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name of the JDBC connection pool you created previously.
        5. Navigate to Configurations | server-config | JVM Settings and select the JVM Options tab. Add -Doracle.jdbc.J2EE13Compliant=true, which is used to make sure the driver behaves in a JEE-compliant manner.
        6. To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11g database driver is ojdbc6dms.jar.

    An upcoming entry will demonstrate configuring GlassFish for Oracle ADF applications.
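
    For those preferring the command line, a rough sketch of the asadmin equivalents follows; the pool name, credentials, and URL are placeholders, and note that literal colons inside --property values have to be escaped:

        asadmin create-jdbc-connection-pool \
            --restype javax.sql.DataSource \
            --datasourceclassname oracle.jdbc.pool.OracleDataSource \
            --property 'user=hr:password=hr:url=jdbc\:oracle\:thin\:@localhost\:1521\:XE' \
            OraclePool
        asadmin create-jdbc-resource --connectionpoolid OraclePool jdbc/OraclePool
        asadmin ping-connection-pool OraclePool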

    Read the article

  • SyncToBlog #11 Stuff and more stuff

    - by Eric Nelson
    Just getting more stuff "down on paper" that grabbed my attention over the last couple of weeks.

        - http://www.koodibook.com/ is live. This is a rich desktop application built in WPF by some ex-colleagues and current friends :-) Check it out if "photo books" is your thing or you like sweet WPF UX.
        - A study rates the Microsoft .NET Framework top and Ruby on Rails second from bottom. I know a bit about both of these frameworks. Both are sweet for different reasons. .NET top. Ok - I liked that. But Ruby on Rails second from bottom just blows away the credibility of the survey results for me.
        - StyleCop is going open source. Sweet. "…will be taking code submissions from the open source community"
        - VMforce for running Java in the cloud. Hmmmmm…
        - Windows Azure Guidance code and docs available on patterns & practices. Download both zip files - one is just the code and the other is 7 chapters of the guide to migration.
        - UK Architect Insight Conference post-event presentations are here, including a full-day track of cloud stuff.
        - http://uxkit.cloudapp.net/ This appears to be a well-kept secret, but the Silverlight Demo Kit is online in Windows Azure. You already knew! Ok - just me then :-)
        - 3-day Silverlight Masterclass training in the UK from people I trust and like :-) http://silverlightmasterclass.net/ (£995)
        - SQL Server Driver for PHP 2.0 CTP adds PHP's PDO-style data access for SQL Server/SQL Azure
        - A Domain Oriented N-Layered .NET 4.0 App Sample from Microsoft Spain. Not looked at it yet - but it was recommended to me (tx Torkil Pedersen)

    You might also want to check out my delicious stream - a blur of azure, ruby and gaming right now: http://delicious.com/ericnel :-)

    Read the article

  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working at a company using Perforce and am making way for distributed version control with Mercurial. I've had success importing Perforce history using the perfarce plugin (quite a suitable name; I laugh every time I see/say it); however, this only works with a single branch at a time. Here's how my P4 integration setup works:

        1. In Perforce, create a "client", which is kind of a description of what you will be constantly updating/checking out. This can only address one branch at a time (trunk or other).
        2. Once you do this, run hg clone p4://<server>/<client_name>
        3. Go to .hg/hgrc and put in the perforce path line: perforce = p4://<server>/<client_name>
        4. Work normally with the code under Mercurial; do hg pull perforce to sync up, and hg push to export a changelist.

    What I'd like to be able to do is have a Perforce path per branch and have everything work in the same repository. Now, pushing is not a problem; however, if I pull the history from another branch it ends up on the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch, as sketched below. Even if I have no merging history, it would be sweet enough to be able to preserve it. There are also other plugins that let you integrate Mercurial with centralized VCSs, but AFAIK the Subversion one has the same problem. I don't think there is a straight-through way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.
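
    For illustration only, the multi-path .hg/hgrc being asked for might look like this; the client names are hypothetical, and whether perfarce can map each pull onto a named branch rather than default is exactly the open question:

        [paths]
        perforce    = p4://server/trunk_client
        perforce-R5 = p4://server/r5_client

        # desired usage, if the extension supported it:
        # hg pull perforce-R5    (land changesets on Mercurial's R5 branch)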

    Read the article

  • SQL SERVER – Two Puzzles – Answer and Win USD 25 Gift Card

    - by pinaldave
    Today I have two simple T-SQL puzzles. You can answer them and win a USD 25 gift card. The gift card will be sent by email to the winner, and you will get a choice of gift card brand based on your preference and country location.

    Puzzle 1: What will be the outcome, and why?

        DECLARE @x REAL;
        SET @x = 9E-40
        SELECT @x;

    The outcome here is obvious, as I have used a negative exponent in the assignment. What is the reason behind it?

    Puzzle 2: Why will the outcome be different from Puzzle 1?

        DECLARE @y REAL;
        SET @y = 9E+40
        SELECT @y;

    The outcome of this puzzle is very different from Puzzle 1, as I have used a positive exponent. There is a number six (6) in the result - why?

        Msg 232, Level 16, State 2, Line 2
        Arithmetic overflow error for type real, value = 90000000000000006000000000000000000000000.000000.

    How to participate: To win the USD 25 gift card you will have to answer both of the questions on my Facebook page. If you are on Twitter, you can increase your chance of winning by tweeting your participation. This contest is open to anyone from any country. The winner will be selected randomly and announced on July 7, 2011. Related Post: SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • C# with keyword equivalent

    - by oazabir
    There’s no with keyword in C#, like Visual Basic has. So you end up writing code like this:

        this.StatusProgressBar.IsIndeterminate = false;
        this.StatusProgressBar.Visibility = Visibility.Visible;
        this.StatusProgressBar.Minimum = 0;
        this.StatusProgressBar.Maximum = 100;
        this.StatusProgressBar.Value = percentage;

    Here’s a workaround:

        With.A<ProgressBar>(this.StatusProgressBar, (p) =>
        {
            p.IsIndeterminate = false;
            p.Visibility = Visibility.Visible;
            p.Minimum = 0;
            p.Maximum = 100;
            p.Value = percentage;
        });

    It saves you repeatedly typing the same class instance or control name over and over again. It also makes the code more readable, since it clearly says that you are working with a progress bar control within the block. If you are setting properties of several controls one after another, it’s easier to read such code this way, since you will have a dedicated block for each control. It’s a very simple one-line function that does it:

        public static class With
        {
            public static void A<T>(T item, Action<T> work)
            {
                work(item);
            }
        }

    You could argue that you can just do this:

        var p = this.StatusProgressBar;
        p.IsIndeterminate = false;
        p.Visibility = Visibility.Visible;
        p.Minimum = 0;
        p.Maximum = 100;
        p.Value = percentage;

    But it’s not elegant. You are introducing a variable “p” in the local scope of the whole function. This goes against naming conventions. Moreover, you can’t limit the scope of “p” to a certain place within the function.

    Read the article

  • Instructor Insight: Using the Container Database in Oracle Database 12 c

    - by Breanne Cooley
    The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB. Bundling Databases Inside One Container: In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of pluggable databases, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software. Minimizing Storage: Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let’s take one-to-many databases and “plug” them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We won’t need as many background processes either, thus reducing the overhead cost of the CPU resource. Improve Security Levels within Each Pluggable Database: We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too. The bottom line: it's a good idea to at least consider using a CDB. -Christopher Andrews, Senior Principal Instructor, Oracle University
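
    For a rough sense of what "plugging in" looks like in practice, here is a minimal Oracle 12c sketch; the PDB name, admin user, and file paths are invented for illustration:

        CREATE PLUGGABLE DATABASE pdb_sales
          ADMIN USER pdb_admin IDENTIFIED BY a_password
          FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                               '/u01/app/oracle/oradata/cdb1/pdb_sales/');

        ALTER PLUGGABLE DATABASE pdb_sales OPEN;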

    Read the article

  • Password Management for Oracle WebLogic customers

    - by Anthony Shorten
    One of the most common requests for enhancements that crosses my desk is that customers wish to allow end users to change their passwords from within our products. Typically, password management is not in the realm of individual applications but is an infrastructure requirement, so we don't usually add this to our roadmaps by default. The issue is that with the vast range of security stores that can be used with our product line across the Web application servers we support, it is almost impossible to come up with a generic enough API to work across them. If you have a specific security store on a specific Web application server platform, then there are simpler solutions. There are a number of ways of implementing this without us adding product-specific functionality: Oracle sells Identity Management software that offers common APIs to manage passwords; you can purchase those products and link to the password-change dialog in those products using Navigation Keys. If you are a customer using Oracle WebLogic, there are sample JSPs that can be linked in to provide this functionality on Oracle TechNet (registration required) under Code Samples (project S20). These can be added as a Navigation Key to complete the functionality, which will allow end users to manage their own passwords. Obviously these are all samples and should be treated as customizations when you implement them. If you wish to understand Navigation Keys, look at the Oracle Utilities Application Framework Integration Guidelines (Doc Id: 789060.1), available from My Oracle Support.

    Read the article

  • OpenVPN, Server 12.04, connect to machines in home LAN behind VPN server

    - by inexion
    Problem: I've set up a working OpenVPN server, and am able to connect to it from anywhere using my Mac laptop and Tunnelblick. When I connect, I'm assigned an IP address of 10.8.0.x; the server is 10.8.0.1, so I have no problem SSHing into it. Once SSHd in, I can even ping other machines (obviously) on my home network (192.168.1.x). Desired outcome: What I want is to connect to the VPN server and, instead of getting a 10.8.0.x address, get a 192.168.1.x address on my home network. I can't figure out how to talk to the OTHER machines on my home network WITHOUT being SSHd into the VPN server. I'd like to just connect to my VPN server and then be a part of my home network. Attempted solutions: I've read that I need to set up routes and/or enable IP forwarding. I enabled IP forwarding using sudo sysctl -w net.ipv4.ip_forward=1, but that doesn't seem to have done anything. I've also uncommented a line in OpenVPN's server.conf file:

        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 192.168.1.0 255.255.255.0"
        ;push "route 192.168.20.0 255.255.255.0"

    But still no luck; I still get a 10.8.0.x address... I've also read I may have to add routes to the router itself, but haven't tried that. Any help appreciated, thanks!
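
    For what it's worth, the usual routed-VPN recipe adds two pieces beyond the push "route" line: making forwarding persistent and NATing the tunnel subnet out the LAN-facing interface (eth0 here is an assumption). Actually receiving a 192.168.1.x address, as described above, would instead require a bridged (tap) setup using OpenVPN's server-bridge directive rather than the routed tun default.

        # /etc/sysctl.conf - make IP forwarding survive reboots
        net.ipv4.ip_forward=1

        # NAT the VPN subnet onto the home LAN (assuming eth0 faces the LAN)
        sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE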

    Read the article

  • Mount Ubuntu shares remotely with Mac and Windows

    - by Donald
    First time on here so please be gentle! I have set up a small school network with an Ubuntu 12.04 server, used mainly as a file server. I managed to set the server up (all command-line based; no GUI) and set up the Samba shares, which work really well internally. Internally, the school has a combination of Macs and Windows machines, and they can all access the shares happily. The school has a fixed-IP ADSL connection, and I have added a route in the router to allow me remote access to the server using SSH (port 22). That also works well. All good so far! What I now want to do is allow remote access to the shares. I have done a bit of reading and thought I had found the answer with SSHFS, but I am still none the wiser. So, my basic questions are: In Windows, how can I map to the Ubuntu shares across the internet through the router? In Mac OS, how can I add the remote share across the internet? The school used to have a Windows server and the users were used to creating a VPN and then pulling up the shared folders etc., but I'm unsure how to do this with the Ubuntu server. I assume I need to add another route through the router to allow for SSHFS or something similar? Thanks in advance... Donald
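
    For the Mac side, a hedged sketch of the SSHFS route; the host name and share path are examples, and this assumes FUSE for OS X and sshfs are installed:

        mkdir -p ~/school-share
        sshfs -p 22 user@school.example.com:/srv/samba/share ~/school-share
        # when finished:
        umount ~/school-share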

    Read the article

  • Friday Fun: Christmas Tree Light Up

    - by Asian Angel
    Another week has thankfully passed by, so it is time to take a break and have some fun. This week’s game tests your ability to light up the whole Christmas tree…can you figure out the correct wiring configuration? Christmas Tree Light Up: The object of the game is simple…light up all of the bulbs on the Christmas tree. While the game may look quick and easy at first, you will need to do some thinking and experimenting to come up with the correct wiring configuration. The instructions are very simple…just click on any of the wiring sections or bulbs to rotate them. Keep in mind that you may have to click a few times to line the wiring sections or bulbs up as desired, since the rotation is always clockwise. Note: You will need to use all of the wiring sections available to completely light the tree up. Each time you will be presented with a different starting setup coming from your power source. Time to hook up the lights! Note: It is recommended that you disable the sound for the game since the “rotation” sounds can be slightly irritating. A nice start, but there are still a lot of bulbs to light up. Getting closer… Almost there…only two more bulbs to light up. Success! Have fun playing! Play Christmas Tree Light Up

    Read the article

  • When too much encapsulation was reached

    - by Samuel
    Recently, I read a lot of good articles about how to do good encapsulation. And when I say "good encapsulation", I'm not talking about hiding private fields behind public properties; I'm talking about preventing users of your Api from doing the wrong things. Here are two good articles on this subject: http://blog.ploeh.dk/2011/05/24/PokayokeDesignFromSmellToFragrance.aspx http://lostechies.com/derickbailey/2011/03/28/encapsulation-youre-doing-it-wrong/ At my job, the majority of our applications are not intended for other programmers but rather for customers. About 80% of the application code is at the top of the structure (not used by other code). For this reason, there is probably no chance that this code will ever be used by another application. An example of encapsulation that prevents the user from doing the wrong thing with your Api is to return an IEnumerable instead of an IList when you don't want to give the user the ability to add or remove items from the list. My question is: when should encapsulation be considered too much object-oriented purism, keeping in mind that each hour of programming is charged to the customer? I want to write good code that is maintainable, easy to read and easy to use, but when this is not a public Api (used by other programmers), where should we put the line between perfect code and not-so-perfect code? Thank you.
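
    (As a rough analogue of that IEnumerable-vs-IList example, in Python and invented here purely for illustration; the original discussion is .NET:)

        class Basket:
            def __init__(self):
                self._items = []

            def add(self, item):
                self._items.append(item)

            @property
            def items(self):
                # Return a tuple instead of the internal list so callers
                # cannot add or remove items behind our back.
                return tuple(self._items)

        basket = Basket()
        basket.add('apple')
        # basket.items.append('pear')  # AttributeError: 'tuple' object has no attribute 'append'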

    Read the article

  • Important Tips to make a user friendly Homepage

    - by Aditi
    We have done a lot of redesign work lately for many online businesses. There are some basic things we want to stress in any web design that make it usable. Unfortunately, most designers care only about the graphic elements, clean code and navigation. Most designers do not realize that a design could be a masterpiece to them but tough for a layman to use. It is very important to understand usability and calls to action for any business, and then implement them on the website's homepage.

    Showcase Offering: The purpose of the website needs to be stated on the homepage: what they are offering, why one should do business with them, and why they are better than their competitors. Include a tag line under your logo that explicitly summarizes what the site or company does.

    Ease of Navigation: Make it easy for your users to find what they are looking for: archives, articles, services, etc. Visible and easy-to-use navigation allows just that. You may also want to feature some of the most-visited content on the homepage itself.

    Search Form: Let your users take advantage of a search form on your homepage; keep it visible enough that if they have trouble locating certain content through your navigation, the search bar comes in handy, especially if you have thousands and thousands of web pages.

    Liquid Layout: There was a time when everyone used a standard resolution; not any more. People have different screen sizes, and many now browse on handhelds. Try keeping a liquid width so the layout can adjust to each visitor's screen.

    A first impression is a lasting impression; leaving a good one through your homepage can do wonders for your business.

    Read the article

  • Current Technologies

    - by Charles Cline
    I currently work at the University of Kansas (KU) and before that Stanford University; to be particular, the Stanford Linear Accelerator Center (SLAC). Collaborating with various Higher Ed institutions over the past several years has shown a marked increase in the Microsoft side of the house. To give you an idea of our current environment, here are some of the things we (Enterprise Systems) have been working on in the two years I've been at KU:

        - Migrated from Novell to Active Directory (AD), although we're still leveraging Novell for IDM. We currently have 550,000+ objects in AD, and we still have several departments to bring in.
        - Upgraded from Exchange 2003 to Exchange 2010 and Forefront Online Protection for Exchange (FOPE)
        - Implemented SCCM 2007 for Windows systems management
        - Implemented central file storage using EMC products for the backend, using CIFS as the frontend
        - Restructuring AD domains and forests to decrease administrative overhead and provide a primary authentication mechanism for the entire University
        - Determining Key Performance Indicators for AD and Exchange
        - Implemented SCOM 2007 to monitor AD and Exchange
        - Implemented Confluence for collaboration within IT and with other technology providers at the University
        - Implemented Data Protection Manager (DPM) for backup of AD and Exchange
        - Built a test and QA environment to better facilitate upcoming changes to the environment
        - Almost ready to raise the AD domain level to 2008 R2

    I'm sure I'm missing things, and my next post will cover some of the things we're getting ready for - like Centrify to provide AD for OS X and Linux systems. If anyone would like more info on a particular area, please drop me a line. I'd be happy to discuss.

    Read the article

  • Where to Perform Authentication in REST API Server?

    - by David V
    I am working on a set of REST APIs that needs to be secured so that only authenticated calls will be performed. There will be multiple web apps to service these APIs. Is there a best-practice approach as to where the authentication should occur? I have thought of two possible places. (1) Have each web app perform the authentication by using a shared authentication service. This seems to be in line with tools like Spring Security, which is configured at the web-app level. (2) Protect each web app with a "gateway" for security. In this approach, the web app never receives unauthenticated calls. This seems to be the approach of Apache HTTP Server Authentication. With this approach, would you use Apache or nginx to protect it, or something else in between Apache/nginx and your web app? For additional reference, the authentication is similar to services like AWS that have a non-secret identifier combined with a shared secret key. I am also considering using HMAC. Also, we are writing the web services in Java using Spring. Update: To clarify, each request needs to be authenticated with the identifier and secret key. This is similar to how AWS REST requests work.
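
    For reference, a language-agnostic sketch (in Python) of the AWS-style identifier-plus-secret HMAC scheme mentioned above; the header names and canonical-string layout are invented for illustration, not a published spec:

        import hashlib
        import hmac
        import time

        def sign_request(access_id, secret_key, method, path, body=b''):
            # Canonical string: method, path, timestamp, and a body digest.
            timestamp = str(int(time.time()))
            canonical = '\n'.join([method, path, timestamp,
                                   hashlib.sha256(body).hexdigest()])
            signature = hmac.new(secret_key.encode(), canonical.encode(),
                                 hashlib.sha256).hexdigest()
            return {'X-Auth-Id': access_id,
                    'X-Auth-Timestamp': timestamp,
                    'X-Auth-Signature': signature}

        # The server recomputes the signature from its copy of the secret and
        # compares with hmac.compare_digest() to avoid timing leaks.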

    Read the article

  • Identity Management as a Controls Infrastructure

    - by Darin Pendergraft
    Identity systems are indispensable to managing online resources, and are becoming increasingly complex as businesses adapt their current infrastructures to support a broad user population across a wide range of devices. Adding point products to solve problems addresses the short-term need, but complicates the longer-term management outlook. Download the latest whitepaper HERE to see how Oracle is taking a platform approach to building a scalable and secure controls infrastructure that enables businesses to engage customers and gives employees secure access to corporate resources from anywhere.

    Read the article

  • Clean URLs issue using .htaccess in PHP project

    - by x4ph4r
    I am working on a PHP Laravel project and am currently facing issues with my .htaccess file. I have the following .htaccess:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [L]
        </IfModule>

    When I reload my page it gives me the following error:

        404 Not Found
        The requested URL /contacts was not found on this server.

    Then I opened the /etc/apache2/users/username.conf file, which had the following lines:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    In the above I changed AllowOverride None to AllowOverride All. Then I reloaded the page and got the following error:

        403 Forbidden
        You don't have permission to access /contacts on this server.

    When I add FollowSymLinks to the .htaccess Options, as in Options -MultiViews FollowSymLinks, then sometimes I get a 500 Internal Server Error and sometimes *Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data*. Each time I reload my page I get one of these errors with the FollowSymLinks option. I also uncommented the following lines in /etc/apache2/httpd.conf:

        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        LoadModule php5_module libexec/apache2/libphp5.so

    and I am still getting the same permission-denied error. Please help me; I have been trying to solve this problem for the past 3 days but it is still unresolved.
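
    For what it's worth, Apache requires that the arguments to an Options directive either all carry a +/- prefix or all carry none; mixing the two forms, as in Options -MultiViews FollowSymLinks, is invalid syntax and is one plausible source of the 500. A consistent version of the file would be:

        <IfModule mod_rewrite.c>
            Options -MultiViews +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [L]
        </IfModule>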

    Read the article

  • How to configure SoapUI with client certificate authentication

    - by gvdmaaden
    SoapUI is one of the best free tools around for testing web services. Some time ago I was trying to send a SOAP message to an SSL web service that was set up for client certificate authentication. I pretty soon got stuck on the “javax.net.ssl.SSLException: HelloRequest followed by an unexpected handshake message” error, but after reading several posts on the internet I solved that issue. It’s not really that complicated after all, but since I could not find a decent place on the internet that explains this scenario properly, here is the list of steps you need to follow to make it work. Note: the following steps are based on a Windows environment.

    Step one: Export your certificate (the one that you want to use as the client certificate) using the export wizard, with the private key and with all certificates in the certification path. Give it a password (anything you want) and export it as a PFX file to a location somewhere on disk.

    Step two: Install the newest version of SoapUI (currently it is 3.6.1). Open the file C:\Program Files\eviware\soapUI-3.6.1\bin\soapUI-3.6.1.vmoptions and add this line at the bottom:

        -Dsun.security.ssl.allowUnsafeRenegotiation=true

    This is needed because of a Java security feature in the newest frameworks (for further reading about this issue, see http://www.soapui.org/forum/viewtopic.php?t=4089 and http://java.sun.com/javase/javaseforbusiness/docs/TLSReadme.html).

    Open SoapUI and go to Preferences > SSL Settings and configure your certificate in the keystore (use the same password as in step one). That should be it. Just create a new project and import the WSDL from the client-authenticated SSL web service, and now you should be able to send SOAP messages with client certificate authentication. The above steps worked for me, but please drop a note if they do not work for you.
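
    As an optional sanity check before pointing SoapUI at the file, the JDK's keytool can list the PFX contents to confirm the private key and certificate chain were exported (the file name here is an example):

        keytool -list -v -storetype PKCS12 -keystore myclientcert.pfx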

    Read the article

  • Issue with multiplayer interpolation

    - by Ben Cracknell
    In a fast-paced multiplayer game I'm working on, there is an issue with the interpolation algorithm. You can see it clearly in the image below.

        Cyan: Local position when a packet is received
        Red: Position received from packet (goal)
        Blue: Line from local position to goal when packet is received
        Black: Local position every frame

    As you can see, the local position seems to oscillate around the goals instead of moving between them smoothly. Here is the code:

        // local transform position when the last packet arrived. Will lerp from here to the goal
        private Vector3 positionAtLastPacket;
        // location received from last packet
        private Vector3 goal;
        // time since the last packet arrived
        private float currentTime;
        // estimated time to reach goal (also the expected time of the next packet)
        private float timeToReachGoal;

        private void PacketReceived(Vector3 position, float timeBetweenPackets)
        {
            positionAtLastPacket = transform.position;
            goal = position;
            timeToReachGoal = timeBetweenPackets;
            currentTime = 0;
            Debug.DrawRay(transform.position, Vector3.up, Color.cyan, 5);  // current local position
            Debug.DrawLine(transform.position, goal, Color.blue, 5);       // path to goal
            Debug.DrawRay(goal, Vector3.up, Color.red, 5);                 // received goal position
        }

        private void FrameUpdate()
        {
            currentTime += Time.deltaTime;
            float delta = currentTime / timeToReachGoal;
            transform.position = FreeLerp(positionAtLastPacket, goal, currentTime / timeToReachGoal);
            // current local position
            Debug.DrawRay(transform.position, Vector3.up * 0.5f, Color.black, 5);
        }

        /// <summary>
        /// Lerp without being locked to 0-1
        /// </summary>
        Vector3 FreeLerp(Vector3 from, Vector3 to, float t)
        {
            return from + (to - from) * t;
        }

    Any idea about what's going on?

    Read the article

  • Dynamic real-time pathfinding with C# and unity

    - by Yakri
    A buddy and I are working on a simple 2D top-down arena combat game similar to OpenGLAD (I grew up on ye olde GLADIATOR). The thing is, we want to make some substantial deviations from our source of inspiration, including completely destructible/changeable terrain: rivers that can be frozen, walls that can be knocked down, etc., as well as letting players and NPCs build new terrain objects, some of which cannot be moved through or seen through. So I'm tasked with creating the AI, starting with pathfinding. Because of all the changeable terrain, we need something that can check whether the player or other NPCs are in line of sight, and which can then find paths around existing terrain without getting completely confused by new terrain popping up and old terrain vanishing, and that is even capable of breaking through terrain. A lot of that will just be filling in the framework of the feature, but I really just don't know where to start. What I'm really looking for are relevant websites, books, articles, or keywords to google. I just can't quite find a direction to start in, because most pathfinding approaches we've googled up just won't give us even the most basic level of robustness we need.

    Read the article

  • Could not calculate upgrade from Maverick Meerkat to Natty Narwhal

    - by xralf
    I upgraded from Ubuntu Lucid Lynx to Maverick Meerkat with the following commands:

        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install update-manager-core
        sudo vi /etc/update-manager/release-upgrades    (and changed the last line to Prompt=normal)
        sudo do-release-upgrade -d

    This upgrade was OK. I decided to repeat the same steps and to upgrade Maverick Meerkat to Natty Narwhal. It ended with this message:

        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        Can not mark 'xubuntu-desktop' for upgrade
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Nov 21 09:37:21 2011) ===
        === Command terminated with exit status 1 (Mon Nov 21 09:37:21 2011) ===

    How can I correct it?

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using Ubuntu 10.10 LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected that it would alter GRUB, and it did. Right now, the partitions on my software RAID 0 look like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides around and I am always stuck at GRUB not being able to detect the usual RAID content. I've tried running:

        sudo grub
        > root (hd0,0)

    GRUB complains it couldn't find my hard disk. So I tried:

        find (hd0,0)

    And it complains that it couldn't find anything. So I tried:

        find /boot/grub/stage1

    It said "file not found". Here's the text from the console:

        ubuntu@ubuntu:~$ grub
        Probing devices to guess BIOS drives. This may take a long time.
        [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ]
        grub> root (hd0,0)
        Error 21: Selected disk does not exist
        grub> find /boot/grub/stage1
        Error 15: File not found

    Fortunately, one person suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to look around, and tried:

        ubuntu@ubuntu:~$ sudo fdisk -l
        Unable to seek on /dev/sda

    This is just step 2 of the instructions in that guide, and I cannot proceed because it cannot seek /dev/sda. However:

        ubuntu@ubuntu:~$ sudo dmraid -r
        /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
        /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0

    So what now? Do you have an idea of how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system right now.

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS? Every Microsoft-oriented infrastructure in today's enterprises will depend largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory deployment makes life for IT personnel and users alike a lot easier. Centralised management and the abilities opened up by having it in place are ample. But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all the objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another. The domain name service: The domain name service (or DNS for short) is a service which translates IP addresses (the identifiers for each computer in your domain) into readable and easy-to-understand names. This service is a prerequisite for ADDS to work, and having a wrong record in a DNS server will make any ADDS service fail. Generally speaking, a DNS service will be run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box… Where to start? If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause headaches down the line. It is for that reason that I like to start building from the bottom up and begin with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure… Whenever starting an ADDS deployment I ask the client the following questions:

        - What are your long-term plans and goals?
        - How flexible do you want it to be?
        - Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?
    Those three questions should give some sort of indication of what direction can be taken, and whether the client has thought about some things themselves :). The technical side of things: What to consider next is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services. Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:

        - Network (WAN/LAN links and physical sites)
        - DNS
        - Namespacing
        - All in one domain, or split up into different domains/forests?
        - Security (both for ADDS and physical sites)

    The network side of things: Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages. DNS namespacing: Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single name (aka contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that for some reason, there will be a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practices is always to use a double name (contoso.com or contoso.lan, for example). Internal namespace: A single internal namespace is just what it sounds like: you have a DNS domain which is different internally from what the client has as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that you have more obscurity from the internet on the DNS side, but it will require additional work to publish services to the web. External namespace: Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. In other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking, this setup is a bad idea from the security side of things. Split DNS: Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, which have no records of your internal resources unless you explicitly publish them. All in one or not? In determining your Active Directory design you can look at the following possibilities:

        - Single forest, single domain
        - Single forest, multiple domains
        - Multiple forests, multiple domains

    I've listed the possibilities in increasing order of administrative magnitude. Microsoft recommends trying to use a single forest, single domain in as many situations as possible.
    It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single-forest design to avoid complexity, unless there are strict requirements for multiple forests. Security: What kind of security is required on the domain, and does this reflect the physical security of the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality; in essence, it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected. Th- Th- Th- That’s all folks! Well, at least for now! In future editions of this series we’ll walk through the different tasks that need to be done and the thought that needs to be put into them. But for all editions we’ll be going with the concept of running a single forest, single domain with a split DNS setup… See you next time!

    Read the article
