Search Results

Search found 5410 results on 217 pages for 'n tier architecture'.

  • How to track conversion rate (clicks to sales) from an internal advertising system?

    - by Ed Woodcock
    I am currently writing an internal advertising system for a company client's website, where the adverts will only be seen by internal users, and all transactions take place internally on the site (i.e. the adverts are for member-only content available on the site). Does anyone have any recommendations as to the best way to track the conversion rate of these adverts (i.e. views:clicks:sales)? EDIT I'm not looking for a 'Why don't you use Google Analytics'-type answer; I'm looking for possible architecture outlines, i.e. a 'why don't you store a GUID in a cache temporarily and see if it ties to the advert' kind of answer. /EDIT In a previous job I did something based on an internal cache, which simply did view:click tracking; however, the addition of the sales rate makes this task more complex, especially if we take into account the idea that someone may click through to an advert and not purchase immediately. Cheers, Ed (N.B. I'm leaving this purposely vague in order to (hopefully) get some answers that provide ideas I've yet to think of, by coming at the problem from a different angle)
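
    Below is a hedged C# sketch of one possible outline along the lines the question itself suggests (all names and the in-memory storage are hypothetical; a real implementation would persist the pending click tokens, e.g. in a cache or table with timestamps, so delayed purchases can still be attributed):

        // Illustrative only: types, names and storage are hypothetical.
        using System;
        using System.Collections.Generic;

        public class AdTracker
        {
            // click token -> advert id; in practice this would live in a cache or database
            // together with a timestamp, so purchases made later can still be matched.
            private readonly Dictionary<Guid, int> _pendingClicks = new Dictionary<Guid, int>();

            public void RecordView(int advertId)
            {
                // increment the view counter for advertId
            }

            // Called when the advert is clicked; the returned token travels with the user
            // (session, cookie or query string) until a purchase happens.
            public Guid RecordClick(int advertId)
            {
                var token = Guid.NewGuid();
                _pendingClicks[token] = advertId;
                return token;
            }

            // Called from the purchase pipeline; ties the sale back to the advert.
            public void RecordSale(Guid clickToken)
            {
                int advertId;
                if (_pendingClicks.TryGetValue(clickToken, out advertId))
                {
                    // increment the sale counter for advertId, then archive the token
                    _pendingClicks.Remove(clickToken);
                }
            }
        }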

    Read the article

  • 32/64 Bit Question

    - by user48408
    Here's my question. What is the best way to determine what bit architecture your app is running on? What I am looking to do: on a 64 bit server I want my app to read 64 bit datasources (stored in reg key Software\Wow6432Node\ODBC\ODBC.INI\ODBC Data Sources) and if it's 32 bit I want to read 32 bit datasources (i.e. read from Software\ODBC\ODBC.INI\ODBC Data Sources). I might be missing the point, but I don't want to care what mode my app is running in. I simply want to know if the OS is 32 or 64 bit. [System.Environment.OSVersion.Platform doesn't seem to be cutting it for me. It's returning Win32NT on my local XP machine and on a Win2k8 64 bit server (even when all my projects are set to target 'any cpu')]
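
    For the "is the OS 64-bit?" part, one hedged C# sketch of the usual options: on .NET 4 and later, Environment.Is64BitOperatingSystem gives the answer directly; on older frameworks a common fallback is checking the pointer size (which only tells you about the process) combined with IsWow64Process via P/Invoke:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class OsBitness
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            [return: MarshalAs(UnmanagedType.Bool)]
            static extern bool IsWow64Process(IntPtr hProcess,
                                              [MarshalAs(UnmanagedType.Bool)] out bool wow64Process);

            public static bool Is64BitOs()
            {
                // A 64-bit process can only be running on a 64-bit OS.
                if (IntPtr.Size == 8)
                    return true;

                // A 32-bit process is on a 64-bit OS exactly when it runs under WOW64.
                bool isWow64;
                return IsWow64Process(Process.GetCurrentProcess().Handle, out isWow64) && isWow64;
            }
        }

    (IsWow64Process only exists on Windows XP SP2 / Server 2003 and later, so a defensive version would probe for the export first.)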

    Read the article

  • Python: Give a class its own `self` at instantiation time

    - by SuperDisk
    I've got a button class that you can instantiate like so: engine.createElement((0, 0), Button(code=print, args=("Stuff!",))) And when it is clicked it will print "Stuff!". However, I need the button to destroy itself whenever it is clicked. Something like this: engine.createElement((0, 0), Button(code=engine.killElement, args=(self,))) However, that would just kill the caller, because self refers to the caller at that moment. What I need to do is give the class its own 'self' in advance... I thought of just making the string 'self' refer to the self variable upon click, but what if I wanted to use the string 'self' in the arguments? What is the way to do this? Is my architecture all wrong or something? Thanks.

    Read the article

  • Simulators for thread scheduling on multicore

    - by shijie xu
    I am seeking a simulator for thread scheduling on a multi-core architecture, that is, one that maps threads to the cores at runtime. During the run, the simulator collects overall cache and IPC statistics. I checked the simulators below, but they don't seem to be sufficient for me: SimpleScalar: a simulator for a single core only. SESC: a multiprocessor simulator with detailed power, thermal, and performance models. QSim: provides instruction-level control of the emulated environment and detailed information about the executing instruction stream. It seems both SESC and QSim support instruction scheduling rather than thread scheduling on the cores? Can anyone help provide some clues or share experience with this part?

    Read the article

  • How is the implicit segment register of a near pointer determined?

    - by Daniel Trebbien
    In section 4.3 of the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 1: Basic Architecture, it says: A near pointer is a 32-bit offset ... within a segment. Near pointers are used for all memory references in a flat memory model or for references in a segmented model where the identity of the segment being accessed is implied. This leads me to wonder: how is the implied segment register determined? I know that (%eip) and displaced (%eip) (e.g. -4(%eip)) addresses use %cs by default, and that (%esp) and displaced (%esp) addresses use %ss, but what about (%eax), (%edx), (%edi), (%ebp), etc.? And can the implicit segment register also depend on the instruction in which the memory address operand appears?

    Read the article

  • VC++ to C# migration guidelines/approaches/Issues

    - by KSH
    Hi all, We are planning to move a few of our VC++ legacy products to C# on the .NET platform. I am in the process of collecting the relevant information before making the proposal, so that I can give an optimistic and effective approach to clients. I am looking for the following details. Any general guidelines on migrating from VC++ to C#.NET. What issues can a team face when we take up this activity? Are there any existing approaches available? I believe many might have tried this but may not have detailed information; consolidating it here would help not only me but anyone who looks for this information. Any good / valid resources available on the internet? Any suggestions from the Microsoft team, if any Microsoft people are in this group? Architecture, component design approaches, etc. Please help me in getting this information; every bit would help me gain a good understanding. Thanks in advance to those who will share their wisdom through this query.

    Read the article

  • Rotating Viewbox contents smoothly

    - by user204562
    I'm looking to teach myself better methods of doing things in WPF that I would normally do manually. In this case, I have a ViewBox with an image in it. I also have a button that uses a DoubleAnimation to rotate the image 90 degrees to the right. This animation works fine, but obviously because it's square as it turns, the image does a "best fit" to the ViewBox, which makes the rotation look quite bad: it gets larger and smaller as its longest edge shrinks or grows to fit that particular rotation angle. I am looking for any advice on the best way to handle this using appropriate WPF methods. Obviously I could do all the calculations manually, but I would be more interested in finding a way to use the controls and methods built into the .NET architecture. Thanks for your help.
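
    One hedged idea, sketched in C# (untested against this exact layout): animate a RenderTransform on the image instead of a LayoutTransform. RenderTransform is applied after layout, so the Viewbox's measure pass is not re-run during the animation and the "best fit" resizing should not kick in mid-rotation.

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using System.Windows.Media.Animation;

        public static class ImageRotation
        {
            // Rotates the image (assumed to sit inside the Viewbox) by a further 90 degrees.
            public static void RotateBy90(Image image)
            {
                var rotate = image.RenderTransform as RotateTransform;
                if (rotate == null)
                {
                    rotate = new RotateTransform(0);
                    image.RenderTransform = rotate;
                    image.RenderTransformOrigin = new Point(0.5, 0.5); // rotate around the centre
                }

                // Applied after layout, so the Viewbox keeps the fit it originally computed.
                var animation = new DoubleAnimation(rotate.Angle, rotate.Angle + 90,
                                                    new Duration(TimeSpan.FromMilliseconds(300)));
                rotate.BeginAnimation(RotateTransform.AngleProperty, animation);
            }
        }

    The trade-off is that layout no longer accounts for the rotated bounds, so the image may overdraw neighbouring elements mid-animation.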

    Read the article

  • How do I change the background image of my app?

    - by user356387
    Hello, I have looked around and found some code which supposedly chooses an image from an array of image objects, but can't find an explanation. I would like my app to have a background image, and the user can select next and previous buttons to scroll through some full-screen images, setting them as the background image as they scroll. I know how to do this in Java, but can't seem to do it for this app. How, or what code, is linked to a next button to grab the next image in the array, and the reverse for the back button? Then that needs to be displayed, obviously. I have used a layered architecture with an MVC-style approach but can't seem to put it together with Objective-C. Would the buttons call the appropriate methods of the so-called delegates, with the delegates handling fetching and returning the images? Would the buttons use the returned images and actually handle the redrawing? I would really appreciate the help. Regards Jarryd

    Read the article

  • what to use for repetitive (daily, weekly, monthly) tasks ? Workflows, Windows Services, something e

    - by mare
    I've been writing Windows Services for a while and they always seem to work fine for things that need to run every day, a few times a week, once a month, etc., but lately I've been thinking about going with Windows Workflow Foundation. However, I am unsure how they would run on a server without some container application (for instance SharePoint)? I worked with SharePoint workflows before and I always had huge problems, at first with the bugs in the workflow architecture implementation (the problems with sleep and delay) and later, when they eventually started to work, they were difficult to manage and change. On the other hand, Windows Services were always quite easy to implement, easy to create a setup for and install, and they were always quite resilient (they were often working for months without crashing or something else going wrong). What do you recommend? Please bear in mind we are working in .NET (the version is no problem; if 4.0 brings something new on this subject, we can use it).
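
    For reference, the Windows Service pattern being described usually boils down to a ServiceBase plus a timer; a minimal, hedged C# sketch (real scheduling for daily/weekly/monthly jobs would compute the next due time rather than use a fixed interval):

        using System;
        using System.ServiceProcess;
        using System.Timers;

        public class RecurringTaskService : ServiceBase
        {
            // Fixed hourly interval for illustration only.
            private readonly Timer _timer = new Timer(TimeSpan.FromHours(1).TotalMilliseconds);

            public RecurringTaskService()
            {
                ServiceName = "RecurringTaskService";
                _timer.Elapsed += (sender, args) => RunScheduledWork();
            }

            protected override void OnStart(string[] args) { _timer.Start(); }
            protected override void OnStop() { _timer.Stop(); }

            private void RunScheduledWork()
            {
                // the daily / weekly / monthly work goes here
            }

            public static void Main()
            {
                ServiceBase.Run(new RecurringTaskService());
            }
        }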

    Read the article

  • c++ - what is faster ?

    - by VaioIsBorn
    If we have the following 2 snippets of code in C++ that do the same task: int a, b=somenumber; while(b > 0) { a = b % 3; b /= 3; } or int b=somenumber; while(b > 0) { int a=b%3; b /= 3; } I don't know much about computer architecture/C++ design, but I think that the first snippet is faster because it declares the integer a at the beginning and just uses it in the while loop, whereas in the second snippet the integer a is declared every time the while loop starts over. Can someone help me with this? Am I correct, and why?

    Read the article

  • Slow draw on some apps and dynamic clocks not working properly with ATI/AMD proprietary drivers

    - by Rakeka
    I've recently purchased a new computer (around July 2010) and I've been having some problems with proprietary video drivers on Linux. The hardware is: Video: ATI/AMD Radeon HD 5870 (XFX HD-587X-ZNFC); Motherboard: Asus P7P55D-E Deluxe; Processor: Intel i5 750; Memory: Kingston Hyperx KHX1600C8D3K2/4GX (2x - 8GB Total); Power Supply: XFX P1-750B-CAG9; There are no overclocks, not even the memories (they are at 1333mhz due processor memory controller limitation). The operational system is a homebrew Linux distribution with the following software: Architecture: x86_64 (multilib) Kernel: 2.6.35.10 Xorg: 7.5 Window Manager: wmii-3.9.2 Video Driver: ATI/AMD Catalyst 10.12 There are no desktop effects programs like compiz fusion or beryl. The problems: With ATI/AMD proprietary driver, some applications are with slow draw/redraw, and, the same applications make the driver to increase the card clocks to maximum (0% gpu activity, only the clocks are increased). I dunno exactly how to describe the slow draw but I'll list some applications and symptoms. xterm Flickers a lot when drawing continuous output; When I'm in a workspace with fullscreen xterm, The gpu load stays at 12% in idle, and, with smaller xterm, smaller GPU load. "aticonfig --odgc" output: Default Adapter - ATI Radeon HD 5800 Series Core (MHz) Memory (MHz) Current Clocks : 157 300 Current Peak : 850 1200 Configurable Peak Range : [600-900] [900-1300] GPU load : 12% "aticonfig --pplib-cmd 'get activity'" output: Current Activity is Core Clock: 157MHZ Memory Clock: 300MHZ VDDC: 950 Activity: 12 percent Performance Level: 0 Bus Speed: 5000 Bus Lanes: 16 Maximum Bus Lanes: 16 More examples: mplayer time info flickers on terminal; "find /" flickers a lot (It takes some time to stop with control-c. But, If I change the workspace or put some window upon it, just after the control-c, it stops instantly); "cat somefile" if the file is big (Xorg.0.log for example) it takes some time to display; vim and less (ex: find / | less) don't have much problems, just a little flicker when scrolling; mplayer (no gui) Slow reproduction and seek with -vo x11; Tearing with -vo xv; Time info flickers on terminal (xterm consequence); gvim A little slow draw when scrolling with page up/page down; Firefox Slow draw/redraw on some pages like www.boadica.com.br and sometimes on www.youtube.com with flash enable (never noticed on many pages); Corruptions when informative yellow boxes are showing and scroll the page (an gray box appears at the same place of the informative box); "Wallpaper" After minimizing a fullscreen window or changing to an empty workspace it takes some time to redraw wallpaper. "Video Card" The core and memory clocks are increased with the events described above and on other situations like change workspace (even without wallpaper), minimize, maximize or move a window; Idle clocks: Core: 157mhz, Memory: 300mhz Full clocks: Core: 850mhz, Memory: 1200mhz xpdf Painful slow scrolling; display (from ImageMagick) Slow menus and sometimes slow image redraw; Programs that I use and are apparently without problems: gimp; pidgin; mplayer (-vo gl, gl2); blender; unigine heaven (better fps than on Windows); doom3; tibia; penumbra overture; amnesia the dark descent (wine); diablo 2 (wine); No problems on Windows (Windows 7 Ultimate 64bit). 
And special note to this: Full desktop effects from Debian and Ubuntu gnome appearance cpanel don't cause ANY problems, even the core and memory clocks don't increase when change workspace, minimize, maximize or move a window. What I've tested: Unsuccessful tests: Tested all drivers versions since 10.6 (released approximately when I've installed the first slackware in this PC); Tested other video card - ATI/AMD Radeon HD 5570 (XFX HD-557X-ZHF2); Tested some options on xorg.conf and that I've found googling (some of these options are commented on my xorg.conf. I'll send the links at the end of post); Tested some patches like 107_fedora_dont_fill_bg_none.patch and xserver-xorg-backclear.patch from Arch Linux Catalyst page (https://wiki.archlinux.org/index.php/ATI_Catalyst); Tested other distros and software versions: Tested XORG-7.6 on my own distribution; Tested Debian Squeeze (testing - from 2010-12-20); Tested Ubuntu Marverick (10.10); Tested Slackware 13.1; Distros info: Architecture: i386 Debian and Ubuntu with all default software (kernel, gnome, xorg, drivers); Slackware with Catalyst from AMD page and default window managers like: fvwm, xfce, and my own build of wmii; Successful tests: Tested other video card (only on my homebrew distro) - NVIDIA Geforce 7300GS with driver 260.19.29; That didn't shown the slow draw problems, but that card is a bit obsolete, so, dunno if that lacks features like the dynamic clocks. I don't dispose of other video cards like nvidia g/gt/gts/gtx 200~400~500 or Radeon HD 3000/4000/6000 to make more tests. Tested other hardware: Video: ATI/AMD Radeon HD 5570 (XFX HD-557X-ZHF2); Motherboard: Intel DG31PR; Processor: Core 2 Duo E6750; Software for that hardware: Fresh install of same distros (except for the mine) with same program versions; That video card (HD 5570) were full time at the maximum clocks (something like 500/750, don't remember) in all the operational systems (Windows XP and Windows 7 too), but it didn't shown the same problems that I have here. I've googled a lot about common problems with ATI/AMD proprietary drivers for Linux and didn't find similar problems, except by the Firefox corruptions, that the solutions were to disable ATI Direct2DAccel and use XAA. With XAA the problems persists and the other applications like pidgin and rest of Firefox showed the same problems of slow draw/redraw. Open source Drivers: With open source drivers (xf86-video-ati-6.13.2) I hadn't the same slow draw problems, but, had other problems, that, for now, make it no viable solution. I'll not discuss it here because this is another line of problems and will confuse everything. If it happens to be the only solution, I'll make another thread to discuss it. Logs and Configs: kernel .config dmesg xorg package list xorg.conf Xorg.0.log

    Read the article

  • IE7 not digesting JSON: "parse error" [resolved]

    - by Kenny Leu
    While trying to GET a JSON, my callback function is NOT firing. $.ajax({ type:"GET", dataType:'json', url: myLocalURL, data: myData, success: function(returned_data){alert('success');} }); The strangest part of this is that my JSON(s) validates on JSONlint this ONLY fails on IE7...it works in Safari, Chrome, and all versions of Firefox, (EDIT: and even in IE8). If I use 'error', then it reports "parseError"...even though it validates! Is there anything that I'm missing? Does IE7 not process certain characters, data structures (my data doesn't have anything non-alphanumeric, but it DOES have nested JSONs)? I have used tons of other AJAX calls that all work (even in IE7), but with the exception of THIS call. An example data return (EDIT: This is a structurally-complete example, meaning it is only missing a few second-tier fields, but follows this exact hierarchy)here is: {"question":{ "question_id":"19", "question_text":"testing", "other_crap":"none" }, "timestamp":{ "response":"answer", "response_text":"the text here" } } I am completely at a loss. Hopefully someone has some insight into what's going on...thank you! EDIT Here's a copy of the SIMPLEST case of dummy data that I'm using...it still doesn't work in IE7. { "question":{ "question_id":"20", "question_text":"testing :", "adverse_party":"none", "juris":"California", "recipients":"Carl Chan" } } EDIT 2 I am starting to doubt that it is a JSON issue...but I have NO idea what else it could be. Here are some other resources that I've found that could be the cause, but they don't seem to work either: http://firelitdesign.blogspot.com/2009/07/jquerys-getjson.html (Django uses Unicode by default, so I don't think this is causing it) Anybody have any other ideas? ANSWER I finally managed to figure it out...mostly via tedious trial-and-error. I want to thank everyone for their suggestions...as soon as I have 15 rep, I'll upvote you, I promise. :) There was basically no way that you guys could have figured it out, because the issue turned out to be a strange bug between IE7 and Django (my research didn't bring up any similar issues). We were basically using Django template language to generate our JSON...and in the midst of this particular JSON, we were using custom template tags: {% load customfilter %} { "question":{ "question_id":"{{question.id}}", "question_text":"{{question.question_text|customfilterhere}}" } } As soon as I deleted anything related to the customfilter, IE7 was able to parse the JSON perfectly! We still don't have a workaround yet, but at least we now know what's causing it. Has anyone seen any similar issues? Once again, thank you everyone for your contributions.

    Read the article

  • Best ASP.NET E-Commerce Framework (aspDotNetStoreFront, AbleCommerce, BVCommerce, MediaChase,...)

    - by EfficionDave
    I need a full-featured asp.net E-Commerce solution for a project. It needs to be easily extensible, as I need to extend it to integrate with a custom Flash-based personalization engine. It should also have a great administrative back-end that handles inventory tracking, reports, labels, shipping, ... so that the client can really use it as the basis for their business. I've used quite a few different E-Commerce systems including OSCommerce, Zen Cart, Catalook, and eTailer. While I was impressed with Zen Cart and eTailer, this project is big enough that I want to try a more serious, commercial offering. I'm particularly interested in thoughts on aspdotnetstorefront, BVCommerce, AbleCommerce, and MediaChase. [UPDATE] I've done quite a bit of evaluation since I posted this question and will give some of my findings below: BVCommerce - The price point is nice, the product seems fairly well regarded, and I've heard good things about customizing/extending it; it just doesn't seem like it's going to meet my needs as a Top Tier ECommerce solution. After viewing the tutorial videos, the Admin interface just seems too basic and outdated. aspDotNetStorefront - The integration with DotNetNuke was its biggest selling point for me, but after reading people's opinions, its integration seems quite weak. It sounds like there's a significant learning curve and the architecture just doesn't sound like it's as clean as it should be. AbleCommerce - After a rather thorough evaluation, I have selected AbleCommerce for at least my next project. The Admin interface is beautiful and has a nice list of features. The E-Commerce portion of the user side is very well done, highly skin-able (using Master Pages and a couple of other techniques), and the checkout workflow has all the features I need while still being very clean. Source to the API can be bought for $500 but you generally won't need it. There are some 3rd party modules available but there's not currently a large market for these. The biggest glaring weakness of AbleCommerce I've found is that its CMS capabilities are very limited and not at all well suited for a non-technical user to make most site content updates. But I have a good solution for that, which I plan on adapting into AbleCommerce. They will automatically generate a site for you to play with to your heart's content for 30 days. It is definitely worth checking out.

    Read the article

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of Self-Tracking Entities that are returned from service calls. The idea is that we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database. Self-Tracking Entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state; it can also have a Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior you need to code it yourself. In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them. Something along the lines of: protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>(); protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection) { var deletedEntities = new List<TEntity>(); DeletedCollections[collection.GetHashCode()] = deletedEntities; collection.CollectionChanged += (o, a) => { if (a.OldItems != null) { deletedEntities.AddRange(a.OldItems.Cast<TEntity>()); } }; } Now if the user decides to save his changes to the server, I can get the list of removed items and send them along: ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers(); customers.RemoveAt(0); MyServiceProxy.UpdateCustomers(customers); At this point the UpdateCustomers method will check my shadow collection for any items that were removed, and send them along to the server side. This approach works fine, until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with a complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection and every few seconds I check to see if the reference is still alive, and if not I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better one. Thanks!
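
    One hedged possibility, assuming .NET 4 is available: key the shadow lists off the collection instance itself with a ConditionalWeakTable, so each shadow list becomes unreachable at the same moment its ObservableCollection does and no WeakReference polling is needed. A sketch (names are illustrative):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Runtime.CompilerServices;

        public class DeletionTracker<TEntity> where TEntity : class
        {
            // The table holds its keys weakly: when a collection is collected,
            // its shadow list is collected along with it.
            private readonly ConditionalWeakTable<ObservableCollection<TEntity>, List<TEntity>> _deleted =
                new ConditionalWeakTable<ObservableCollection<TEntity>, List<TEntity>>();

            public void Track(ObservableCollection<TEntity> collection)
            {
                var deletedEntities = _deleted.GetOrCreateValue(collection);
                collection.CollectionChanged += (o, a) =>
                {
                    if (a.OldItems != null)
                        deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                };
            }

            public IList<TEntity> GetDeleted(ObservableCollection<TEntity> collection)
            {
                List<TEntity> deletedEntities;
                return _deleted.TryGetValue(collection, out deletedEntities)
                    ? (IList<TEntity>)deletedEntities
                    : new List<TEntity>();
            }
        }

    Because the event handler is only reachable through the collection, both the subscription and the shadow list die with the collection, which would remove the need for the dictionary keyed by GetHashCode() and its manual cleanup.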

    Read the article

  • How do you handle EF Data Contexts combined with asp.net custom membership/role providers

    - by KallDrexx
    I can't seem to get my head around how to implement a custom membership provider with Entity Framework data contexts in my asp.net MVC application. I understand how to create a custom membership/role provider by itself (using this as a reference). Here's my current setup: As of now I have a repository factory interface that allows different repository factories to be created (right now I only have factories for EF repositories and in-memory repositories). The repository factory looks like this: public class EFRepositoryFactory : IRepositoryFactory { private EntitiesContainer _entitiesContext; /// <summary> /// Constructor that generates the necessary object contexts /// </summary> public EFRepositoryFactory() { _entitiesContext = new EntitiesContainer(); } /// <summary> /// Generates a new entity framework repository for the specified entity type /// </summary> /// <typeparam name="T">Type of entity to generate a repository for </typeparam> /// <returns>Returns an EFRepository</returns> public IRepository<T> GenerateRepository<T>() where T : class { return new EFRepository<T>(_entitiesContext); } } Controllers are passed an EF repository factory via Castle Windsor. The controller then creates all the service/business layer objects it requires and passes the repository factory into them. This means that all service objects are using the same EF data context and I do not have to worry about objects being used in more than one data context (which of course is not allowed and causes an exception). As of right now I am trying to decide how to build my user and authorization service layers, and have run up against a design roadblock. The User/Authorization service will be a central class that handles the logic for logging in, changing user details, managing roles and determining which users have access to what. The problem is, using the current methodology, the asp.net MVC controllers will initialize their own EF repository factory via Windsor and the asp.net membership/role provider will have to initialize its own EF repository factory. This means that each part of the site will then have its own data context. This seems to mean that if asp.net authenticates a user, that user's object will be in the membership provider's data context, and thus if I try to retrieve that user object in the service layer (say to change the user's name) I will get a duplication exception. I thought of making the repository factory class a singleton, but I don't see a way for that to work with Castle Windsor. How do other people handle asp.net custom providers in an MVC (or any n-tier) architecture without having object duplication issues?
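
    One hedged workaround (the registration syntax below is Windsor's fluent API from memory, so treat it as a sketch): let the container own a single per-web-request factory, and have the provider, which ASP.NET instantiates outside the container, resolve that same instance instead of constructing its own. Controllers and the membership/role provider then share one data context for the duration of a request.

        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        // Hypothetical bootstrapper called from Application_Start. The PerWebRequest
        // lifestyle also requires Windsor's per-web-request HTTP module in web.config.
        public static class ContainerBootstrapper
        {
            public static IWindsorContainer Container { get; private set; }

            public static void Bootstrap()
            {
                Container = new WindsorContainer();
                Container.Register(
                    Component.For<IRepositoryFactory>()
                             .ImplementedBy<EFRepositoryFactory>()
                             .LifeStyle.PerWebRequest);
            }
        }

        // Inside the custom membership/role provider (created by ASP.NET, not Windsor),
        // resolve the shared factory rather than newing one up:
        //
        //   var factory = ContainerBootstrapper.Container.Resolve<IRepositoryFactory>();
        //
        // Controllers keep receiving IRepositoryFactory by constructor injection as
        // before, so both sides of the site work against the same EntitiesContainer
        // within a single request.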

    Read the article

  • Linq to SQL Repository ~theory~ - Generic but now uses Linq to Objects?

    - by Matt Tolliday
    The project I am currently working on uses Linq to SQL as its ORM data access technology. It's an MVC3 web app. The problem I faced was primarily due to the inability to mock (for testing) the DataContext which gets autogenerated by the DBML designer. So to solve this issue (after much reading) I refactored the repository system which was in place - a single repository with separate and duplicated access methods for each table, which ended up with something like 300 methods, only 10 of which were unique - into a single repository with generic methods taking the table and returning more generic types to the upper reaches of the application. My question revolves more around the design I've used to get this far and the differences I'm noticing in the structure of the app. 1) Having refactored the code from the dark ages which used classic Linq to SQL queries: public Billing GetBilling(int id) { var result = ( from bil in _bicDc.Billings where bil.BillingId == id select bil).SingleOrDefault(); return (result); } it now looks like: public T GetRecordWhere<T>(Expression<Func<T, bool>> predicate) where T : class { T result; try { result = _dataContext.GetTable<T>().Where(predicate).SingleOrDefault(); } catch (Exception ex) { throw ex; } return result; } and is used by the controller with a query along the lines of: _repository.GetRecordWhere<Billing>(x => x.BillingId == 1); which is fine, and precisely what I wanted to achieve. ...however.... I'm also having to do the following to get precisely the result set I require in the controller class (the highest point of the app in essence)... viewModel.RecentRequests = _model.GetAllRecordsWhere<Billing>(x => x.BillingId == 1) .Where(x => x.BillingId == Convert.ToInt32(BillingType.Submitted)) .OrderByDescending(x => x.DateCreated). Take(5).ToList(); This - as far as my understanding is correct - is now using Linq to Objects rather than the Linq to SQL queries I was using previously? Is this okay practice? It feels wrong to me but I don't know why. Probably because the logic of the queries is in the very highest tier of the app, rather than the lowest, but... I defer to you good people for advice. One of the issues I considered was bringing the entire table into memory, but I understand that with the IQueryable return type the where clause is taken to the database and evaluated there, thus returning only the result set I require... I may be wrong. And if you've made it this far, well done. Thank you, and if you have any advice it is very much appreciated!!
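
    A hedged variant of the same generic repository that would keep the composition on the SQL side: expose IQueryable<T> instead of materialised results, so the Where/OrderByDescending/Take(5) added higher up are still translated by Linq to SQL into a single query rather than evaluated with Linq to Objects. Sketch only; names are illustrative:

        using System;
        using System.Data.Linq;
        using System.Linq;
        using System.Linq.Expressions;

        public class QueryableRepository
        {
            private readonly DataContext _dataContext;

            public QueryableRepository(DataContext dataContext)
            {
                _dataContext = dataContext;
            }

            // Deferred: nothing hits the database until the caller enumerates.
            public IQueryable<T> Query<T>() where T : class
            {
                return _dataContext.GetTable<T>();
            }

            public IQueryable<T> QueryWhere<T>(Expression<Func<T, bool>> predicate) where T : class
            {
                return _dataContext.GetTable<T>().Where(predicate);
            }
        }

        // Usage: still one SQL statement, with the TOP / ORDER BY generated server-side.
        //   var recent = repository.QueryWhere<Billing>(x => x.BillingId == 1)
        //                          .OrderByDescending(x => x.DateCreated)
        //                          .Take(5)
        //                          .ToList();

    Whether leaking IQueryable above the repository is acceptable is a separate design question; the sketch only shows how to avoid the silent switch to in-memory evaluation.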

    Read the article

  • Show/hide jQuery

    - by Banderdash
    Not much experience with JavaScript, hopefully one of you gurus can help. <div id="themes"> <h2>Research Themes</h2> <ul> <li><a href="">Learn about our approach to the <strong>environment</strong></a><span><a href="#">Expand</a></span></li> <ul class="tier_2 hide"> <li><a href="">Project name numero uno goes here</a></li> <li><a href="">Project name numero dos goes here</a></li> <li><a href="">Project name numero tres goes here</a></li> </ul> <li><a href="">Learn about our approach to <strong>human health</strong></a><span><a href="#">Expand</a></span></li> <ul class="tier_2 hide"> <li><a href="">Project name numero uno goes here</a></li> <li><a href="">Project name numero dos goes here</a></li> <li><a href="">Project name numero tres goes here</a></li> </ul> <li class="last"><a href="">Learn about our approach to <strong>national defense</strong></a><span><a href="#">Expand</a></span></li> <ul class="tier_2 hide"> <li><a href="">Project name numero uno goes here</a></li> <li><a href="">Project name numero dos goes here</a></li> <li><a href="">Project name numero tres goes here</a></li> </ul> </ul> </div><!-- // end themes --> This is my markup. As you can see under each of the first tier of li's there are ul's with classes of tier_2 and hide. I've been trying to create some simple jQuery that on click will remove the hide class from it's child ul, but at the same time check that no other ul's with class of tier_2 are shown (aka the other's have the hide class). This should keep a visitor from expanding so many items at once that it will make the layout look funky. Just not sure how to accomplish this, any ideas?

    Read the article

  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: The set accessors will throw an exception on the spot if the contract for that property is violated. Also, the properties enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object. What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidi) leads to bugs. What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate: public virtual int Blah {get {return _Blah;} set {/*enforces BR's*/}} protected virtual int _Blah {get {return blah;} set {blah = value;}} private int blah; I showed the above in C# 2 (no default properties) to demonstrate how this gets us basically 3 layers of, or views, to blah!!! While this certainly works, it does not seem ideal as it requires the BL to provide one (public) interface for the app-at-large, and another (protected) interface for the data access layer. There is an additional problem: To my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL--whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BR's for some property Blah are no problem, so you refer to it in your O/R mapping... but then later, you have to add some BR's that do become a problem, so then you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (common problem with programming against strings). Has anyone solved these problems?!
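
    One hedged alternative to the three-layer property pattern described above: keep the single business-rule-enforcing property, and let NHibernate hydrate the object through the backing field by choosing a field access strategy in the mapping (e.g. access="nosetter.camelcase"), so queries keep using the property name. A C# sketch with the mapping shown as a comment (names are illustrative):

        // The public setter keeps enforcing business rules; NHibernate bypasses it
        // during hydration because the mapping points it at the backing field.
        public class SomeEntity
        {
            private int blah;   // written directly by NHibernate via the access strategy

            public virtual int Blah
            {
                get { return blah; }
                set
                {
                    // enforce business rules here; only application code takes this path
                    blah = value;
                }
            }
        }

        // Corresponding hbm.xml fragment (illustrative):
        //   <property name="Blah" access="nosetter.camelcase" />
        //
        // HQL and criteria queries still refer to "Blah", so adding business rules to a
        // property later does not force a rename that breaks existing query strings.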

    Read the article

  • How to design service that can provide interface as JAX-WS web service, or via JMS, or as local meth

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or called directly from another service in the same container? Here's some more context: Currently I am responsible for a service (let's call it Service X) with the following properties: Interface definition is a human readable document kept up-to-date manually. Accepts HTTP form-encoded requests to a single URL. Sends plain old XML responses (no schema). Uses Apache to accept requests + a proprietary application server (not servlet or EJB based) containing all logic which runs in a separate tier. Makes heavy use of a relational database. Called both by internal applications written in a variety of languages and also by a small number of third parties. I want to (or at least, have been told to!): Switch to a well-known (pref. open source) JEE stack such as JBoss, Glassfish, etc. Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A. Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice-to-have. Additional considerations are: On a smaller deployment, Service A and Service B will likely be running on the same machine and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other. Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services. I'm happy with either contract-first (WSDL I guess) or (annotated) code-first development. Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file which I've deployed.

    Read the article

  • Windows Azure Learning Plan - Security

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for  Windows Azure.   General Security Information Overview and general  information about Windows Azure Security - what it is, how it works, and where you can learn more. General Security Whitepaper – answers most questions http://blogs.msdn.com/b/usisvde/archive/2010/08/10/security-white-paper-on-windows-azure-answers-many-faq.aspx Windows Azure Security Notes from the Patterns and Practices site http://blogs.msdn.com/b/jmeier/archive/2010/08/03/now-available-azure-security-notes-pdf.aspx Overview of Azure Security http://www.windowsecurity.com/articles/Microsoft-Azure-Security-Cloud.html Azure Security Resources http://reddevnews.com/articles/2010/08/19/microsoft-releases-windows-azure-security-resources.aspx Cloud Computing Security Considerations http://www.microsoft.com/downloads/en/details.aspx?FamilyID=68fedf9c-1c27-4642-aa5b-0a34472303ea&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center Security in Cloud Computing – a Microsoft Perspective http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7c8507e8-50ca-4693-aa5a-34b7c24f4579&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center Physical Security for Microsoft’s Online Computing Information on the Infrastructure and Locations for Azure Physical Security. The Global Foundation Services Group at Microsoft handles physical security http://www.globalfoundationservices.com/security/index.html Microsoft’s Security Response Center http://www.microsoft.com/security/msrc/ Software Security for Microsoft’s Online Computing Steps we take as a company to develop secure software Windows Azure is developed using the Trustworthy Computing Initiative http://www.microsoft.com/about/twc/en/us/default.aspx and  http://msdn.microsoft.com/en-us/library/ms995349.aspx Identity and Access in the Cloud http://blogs.msdn.com/b/technology_titbits_by_rajesh_makhija/archive/2010/10/29/identity-and-access-in-the-cloud.aspx Security Steps you should take While Microsoft takes great pains to secure the infrastructure, platform and code for Windows Azure, you have a responsibility to write secure code. These pointers can help you do that. Securing your cloud architecture, step-by-step http://technet.microsoft.com/en-us/magazine/gg296364.aspx Security Guidelines for Windows Azure http://redmondmag.com/articles/2010/06/15/microsoft-issues-security-guidelines-for-windows-azure.aspx  Best Practices for Windows Azure Security http://blogs.msdn.com/b/vbertocci/archive/2010/06/14/security-best-practices-for-developing-windows-azure-applications.aspx Active Directory and Windows Azure http://blogs.msdn.com/b/plankytronixx/archive/2010/10/22/projecting-your-active-directory-identity-to-the-azure-cloud.aspx Understanding Encryption (great overview and tutorial) http://blogs.msdn.com/b/plankytronixx/archive/2010/10/23/crypto-primer-understanding-encryption-public-private-key-signatures-and-certificates.aspx Securing your Connection Strings (SQL Azure) http://blogs.msdn.com/b/sqlazure/archive/2010/09/07/10058942.aspx Getting started with Windows Identity Foundation (WIF) quickly http://blogs.msdn.com/b/alikl/archive/2010/10/26/windows-identity-foundation-wif-fast-track.aspx

    Read the article

  • WPF vs. WinForms - a Delphi programmer's perspective?

    - by Robert Oschler
    I have read most of the major threads on WPF vs. WinForms and I find myself stuck in the unfortunate ambivalence you can fall into when deciding between the tried and true previous tech (WinForms) and its successor (WPF). I am a veteran Delphi programmer of many years who is finally making the jump to C#. My fellow Delphi programmers out there will understand that I am excited to know that Anders Hejlsberg, of Delphi fame, was the architect behind C#. I have a strong addiction to Delphi's VCL custom components, especially those involved in making multi-step Wizards and components that act as a container for child components. With that background, I am hoping that those of you who switched from Delphi to C# can help me with my WinForms vs. WPF decision for writing my initial applications. Note, I am very impatient when coding, and things like full-fledged auto-complete and proper debugger support can make or break a project for me, including being able to find readily available information on API features and calls and, even more so, workarounds for bugs. The SO threads and comments in the early 2009 date range give me great concern over WPF when it comes to potential frustrations that could mar my C# UI development coding. On the other hand, spending an inordinate amount of time learning an API tech that is, even if it is not abandoned, soon to be replaced (WinForms) is equally troubling, and I do find the GPU support in WPF tantalizing. Hence my ambivalence. Since I haven't learned either tech yet, I have a rare opportunity to get a fresh start and not have to face the big "unlearning" curve I've seen people mention in various threads when a WinForms programmer makes the move to WPF. On the other hand, if using WPF will just be too frustrating or have other major negative consequences for an impatient RAD developer like myself, then I'll just stick with WinForms until WPF reaches the same level of support and ease of use. To give you a concrete insight into my psychology as a programmer, I used VB and subsequently Delphi to avoid altogether the very real pain of coding with MFC, a Windows UI library that many developers suffered through while developing early Windows apps. I have never regretted my luck in avoiding MFC. It would also be comforting to know if Anders Hejlsberg had a hand in the architecture of WPF and/or WinForms, and if there are any disparities in the creative vision and ease of use embodied in either code base. Finally, for the Delphi programmers again, let me know how much "IDE shock" I'm in for when using WPF as opposed to WinForms, especially when it comes to debugger support. Any job market comments updated for 2011 would be appreciated too. -- roschler

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real-estate afforded to modern development environments (high-resolution monitors, dual-monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include: More source code and more documentation on the screen(s) at once Ability to edit documentation independently of source code (regardless of language?) Write documentation and source code in parallel without merge conflicts Real-time hyperlinked documentation with superior text formatting Quasi-real-time machine translation into different natural languages Every line of code can be clearly linked to a task, business requirement, etc. Documentation could automatically timestamp when each line of code was written (metrics) Dynamic inclusion of architecture diagrams, images to explain relations, etc. Single-source documentation (e.g., tag code snippets for user manual inclusion). Note: The documentation window can be collapsed Workflow for viewing or comparing source files would not be affected How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • Chicago SQL Saturday

    - by Johnm
    This past Saturday, April 17, 2010, I journeyed north to the great city of Chicago for some SQL Server fun, learning and fellowship. The Chicago edition of this grassroots phenomenon was the 31st scheduled SQL Saturday since the program's birth in late 2007. The Chicago SQL Saturday consisted of four tracks with eight sessions each and was a very energetic and fast-paced day for the 300+/- SQL Server enthusiasts in attendance. The speaker lineup included national notables such as Kevin Kline, Brent Ozar, and Brad McGehee. My hometown of Indianapolis was well represented in the speaker lineup with Arie Jones, Aaron King and Derek Comingore. The day began with a very humorous keynote by Kevin Kline and Brent Ozar, who emphasized the importance of community events such as SQL Saturday and the monthly user group meetings. They also brilliantly included the impact that getting involved in the SQL community through social media can have on your professional career. My approach to the day was to try to experience as much of the event as I could, so there were very few sessions that I attended for their full duration. I leaped from session to session like a bumble bee, gleaning bits of nectar from each session. Amid these leaps I took the opportunity to briefly chat with some of the in-the-queue speakers as well as other attendees who wandered the hallways. I especially enjoyed a great discussion with Devin Knight about his plans regarding the upcoming Jacksonville SQL Saturday, as well as an interesting SQL interpretation of the Iron Chef, which I think would catch on like wildfire. There were two sessions that stood out as exceptional. So much so that I could not pull myself away: Kevin Kline presented on "SQL Server Internals and Architecture". This session could have been classified as one that is intended for the beginner. Kevin even personally warned me of such as I entered the room. I am a believer in revisiting the basics regardless of the level of your mastery, so I entered into this session in that spirit. It was a very clear and precise presentation. Masterfully illustrated and demonstrated. Brad McGehee presented on "How and When to Use Indexed Views". This was a topic that I was recently exploring and was considering for use in an integration project. Brad effectively communicated the complexity of this feature and what is involved in gaining its full benefit. It was clear at the conclusion of this session that it was not the right feature for my specific needs. Overall, the event was a great success. The use of volunteers, from an attendee's perspective, was masterful. The only recommendation that I would have for the next Chicago SQL Saturday would be to include more time in between sessions to permit some level of networking among the attendees, one-on-one questions for speakers and visits to the sponsor booths. Congratulations to Wendy Pastrick, Ted Krueger, and Aaron Lowe for their efforts and a very successful SQL Saturday!

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/s11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/s11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there's to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I'd want to access this repository through a (virtual) network, i'll now quickly activate the repository-service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository-server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: A full repository with all updates and at the same time incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now lets continue with the AI server.  Just a little bit of reading in the dokumentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  For a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: #zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP-Server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC-server comes with several SMF-services. We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation-service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the installserver itself runs on a x86 machine.) The boot-environment for clients is created in /export/install/fcs and the DHCP-server is configured for 3 IP-addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI-service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query it's configuration interactively at first boot.  This makes it very clear that an AI-server is really only a boot-server.  The true source of packets to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packets are installed and how the filesystems are layed out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  Hence, profiles are very similar to the old sysid.cfg file.

The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use:

root@ai-server:~# mkdir -p /export/install/configs/manifests
root@ai-server:~# cd /export/install/configs/manifests
root@ai-server:~# installadm export -n x86-fcs -m orig_default \
   -o orig_default.xml
root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
root@ai-server:~# vi s11-fcs.small.local.xml
root@ai-server:~# more s11-fcs.small.local.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="S11 Small fcs local">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://192.168.2.12/"/>
        </publisher>
      </source>
      <!--
        By default the latest build available, in the specified IPS
        repository, is installed.  If another build is required, the
        build number has to be appended to the 'entire' package in
        the following form:
        <name>pkg:/[email protected]#</name>
      -->
      <software_data action="install">
        <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name>
        <name>pkg:/group/system/solaris-small-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>
root@ai-server:~# installadm create-manifest -n x86-fcs -d \
   -f ./s11-fcs.small.local.xml
root@ai-server:~# installadm list -m -n x86-fcs
Manifest            Status    Criteria
--------            ------    --------
S11 Small fcs local Default   None
orig_default        Inactive  None

The major points in this new manifest are:
- Install "solaris-small-server".
- Install a few locales less than the default.  I'm not that fluent in French or Japanese...
- Use my own package service as publisher, running on IP address 192.168.2.12.
- Install the initial release of Solaris 11:  pkg:/[email protected],5.11-0.175.0.0.0.2.0
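Since this manifest was added with "-d", it is the default for every client of the service.  Just to illustrate the mechanism: a second, non-default manifest could be tied to selected machines with criteria.  The manifest name, file and MAC address below are made up for illustration only - I didn't actually create this one:

root@ai-server:~# installadm create-manifest -n x86-fcs \
   -f ./s11-fcs.large.local.xml -m s11-large-local \
   -c mac=08:00:27:BB:4E:C2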
Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  The modular approach makes it easy to configure numerous clients later on:

root@ai-server:~# mkdir -p /export/install/configs/profiles
root@ai-server:~# cd /export/install/configs/profiles
root@ai-server:~# sysconfig create-profile -o default.xml
root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
root@ai-server:~# cp default.xml user.xml
root@ai-server:~# vi general.xml mars.xml user.xml
root@ai-server:~# more general.xml mars.xml user.xml
::::::::::::::
general.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/timezone">
    <instance enabled="true" name="default">
      <property_group type="application" name="timezone">
        <propval type="astring" name="localtime" value="Europe/Berlin"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="C"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/keymap">
    <instance enabled="true" name="default">
      <property_group type="system" name="keymap">
        <propval type="astring" name="layout" value="US-English"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/console-login">
    <instance enabled="true" name="default">
      <property_group type="application" name="ttymon">
        <propval type="astring" name="terminal_type" value="vt100"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
      <propval type="astring" name="host" value="files dns"/>
      <propval type="astring" name="printer" value="user files"/>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="system/name-service/cache">
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="network/dns/client">
    <property_group type="application" name="config">
      <property type="net_address" name="nameserver">
        <net_address_list>
          <value_node value="192.168.2.1"/>
        </net_address_list>
      </property>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
</service_bundle>
::::::::::::::
mars.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/>
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="net_address_v4" name="default_route" value="192.168.2.1"/>
      </property_group>
      <property_group type="application" name="install_ipv6_interface">
        <propval type="astring" name="stateful" value="yes"/>
        <propval type="astring" name="stateless" value="yes"/>
        <propval type="astring" name="address_type" value="addrconf"/>
        <propval type="astring" name="name" value="net0/v6"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="mars"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
::::::::::::::
user.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/config-user">
    <instance enabled="true" name="default">
      <property_group type="application" name="root_account">
        <propval type="astring" name="login" value="root"/>
        <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="role"/>
      </property_group>
      <property_group type="application" name="user_account">
        <propval type="astring" name="login" value="stefan"/>
        <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="normal"/>
        <propval type="astring" name="description" value="Stefan Hinker"/>
        <propval type="count" name="uid" value="12345"/>
        <propval type="count" name="gid" value="10"/>
        <propval type="astring" name="shell" value="/usr/bin/bash"/>
        <propval type="astring" name="roles" value="root"/>
        <propval type="astring" name="profiles" value="System Administrator"/>
        <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
      </property_group>
    </instance>
  </service>
</service_bundle>

root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
   -c ipv4=192.168.2.100
root@ai-server:~# installadm list -p
Service Name Profile
------------ -------
x86-fcs      general.xml
             mars.xml
             user.xml
root@ai-server:~# installadm list -n x86-fcs -p
Profile     Criteria
-------     --------
general.xml None
mars.xml    ipv4 = 192.168.2.100
user.xml    None

Here's the idea behind these files:
- "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same.
- "user.xml" only contains user definitions.  That is, a root password and a primary user.  Both of these profiles will be valid for all clients (for now).
- "mars.xml" defines network settings for an individual client.  This profile is associated with an IP address.  For this to work, I'll have to tweak the DHCP settings in the next step:

root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
root@ai-server:~# vi /etc/inet/dhcpd4.conf
root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
host 080027AA3DB1 {
  hardware ethernet 08:00:27:AA:3D:B1;
  fixed-address 192.168.2.100;
  filename "01080027AA3DB1";
}

This completes the client preparations.  I manually added the IP address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)
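Before booting anything, a quick sanity check doesn't hurt.  Something along these lines should confirm that the client and the profiles are known to the service - I'm quoting the validate syntax from memory, so check the installadm man page if it complains:

root@ai-server:~# installadm list -c -n x86-fcs
root@ai-server:~# installadm validate -n x86-fcs -P general.xml user.xml mars.xml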
Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.

root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
1,2c1,2
< default=1
< timeout=10
---
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1
default=1
timeout=10
min_mem64=0

title Oracle Solaris 11 11/11 Text Installer and command line
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

title Oracle Solaris 11 11/11 Automated Install
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

Setting the default entry to 1 (the "Automated Install" entry) and shortening the timeout makes the client boot straight into the automated installation.  Now just boot the client off the network using PXE boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?
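One last remark: once a client is installed (or if I mistype a MAC address), the client-specific boot entry can be removed again.  From memory, that would be something like the following - a sketch rather than a tested recipe, and the fixed-address line I added to dhcpd4.conf by hand would still need to be cleaned up manually:

root@ai-server:~# installadm delete-client 08:00:27:AA:3D:B1
root@ai-server:~# installadm list -c -n x86-fcs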

