Search Results

Search found 16948 results on 678 pages for 'static analysis'.


  • Sharing an internet connection through the Ethernet port

    - by Bob Cunningham
    I have a small living room PC (Bohica, running fully updated Ubuntu 10.10/Maverick) connected to my HDTV that I use for web browsing and media streaming. It connects via WiFi (wlan0) to my Fedora server (Snafu), which in turn connects to the internet. I use static addressing, and everything has been working fine. I just got a Blu-ray player, and I'd like to give it wired network access to the internet via Bohica's available wired ethernet port (eth0). So far, I haven't been able to get eth0 and the network configured so that the Blu-ray player can reach the internet.

    Here's my wlan0 configuration:

        ip4 addr: 192.168.0.100
        mask:     /24 (255.255.255.0)
        gateway:  192.168.0.4 (fedora box)

    The Blu-ray player is set to an IP of 192.168.0.98/24, with the same gateway as above. I want eth0 set to an IP of 192.168.0.99/24, but when I do this using nm-connection-editor I lose internet access (the system tries to use eth0 as the default internet access interface). How do I get my Blu-ray player to talk to the internet through Bohica, and do so without disrupting my current (working) network? Thanks.

    Edit: Here's the relevant output from nm-tool with the Blu-ray player connected:

        $ nm-tool
        NetworkManager Tool
        State: connected

        - Device: eth0
          Type:             Wired
          Driver:           forcedeth
          State:            disconnected
          Default:          no
          HW Address:       90:FB:A6:2C:94:32
          Capabilities:
            Carrier Detect: yes
            Speed:          100 Mb/s
          Wired Properties
            Carrier:        on

        - Device: wlan0 [wlan0]
          Type:             802.11 WiFi
          Driver:           ndiswrapper
          State:            connected
          Default:          yes
          HW Address:       00:26:5A:C0:D0:05
          IPv4 Settings:
            Address:        192.168.0.100
            Prefix:         24 (255.255.255.0)
            Gateway:        192.168.0.4
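
    The symptom described, internet access dying as soon as eth0 gets an address, is typically caused by the new connection installing its own default route. Purely as a proxy-ARP flavoured sketch (addresses taken from the question; not a complete recipe, and configured outside NetworkManager), eth0 can be brought up with no gateway line so wlan0 keeps the default route, with forwarding and proxy ARP doing the rest:

        # /etc/network/interfaces stanza -- note: no "gateway" line for eth0
        iface eth0 inet static
            address 192.168.0.99
            netmask 255.255.255.0

        # tell the kernel the player is specifically behind eth0
        sudo ip route add 192.168.0.98/32 dev eth0

        # forward between the two legs and answer ARP on each leg's behalf
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo sysctl -w net.ipv4.conf.all.proxy_arp=1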

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or to validation?

    ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that is more high-level, like UAT test cases written by business users. The same ISO standard does not mention any specific verification testing (7.2.4.3.2) except for requirement verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase?

    The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against the high-level description as verification, mentioning that this includes all kinds of testing, such as functional and load testing.

    In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net.

    I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Lost contact with my NAS after changing its IP

    - by Beles
    I did some brain-dead reconfiguring of my D-Link DNS-323 NAS some days ago. I have a home network where each computer gets a dynamically allocated IP address starting at 192.168.1.100. The irritating point (for me at least) was that the NAS changed IP if the power went down or I turned off the router; I then had to remap a drive letter to point to the new IP address of the NAS. To remedy that, I configured the NAS to have a static IP, 192.168.0.10. I had no good reason to choose that IP, other than that I found it in a user manual for the NAS. After I changed the IP and rebooted the NAS, it disappeared from the network, never to be found again. Now I have a black brick standing in my home, looking good, but "dead". Could anyone point me in a direction which helps me solve this problem? I have about 100 GB worth of pictures of my children on this brick, so I really want it back :-) Sincerely,
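
    Note the subnet mismatch in the question: the LAN hands out 192.168.1.x addresses, while the NAS was given 192.168.0.10, so nothing on the 192.168.1.x side can reach it. As a sketch of one way back in (assuming a Windows PC, since drive letters are being mapped, and a hypothetical adapter name "Local Area Connection"), temporarily move one machine onto the NAS's subnet:

        rem put the PC on the 192.168.0.x subnet alongside the NAS
        netsh interface ip set address "Local Area Connection" static 192.168.0.2 255.255.255.0

        rem when finished reconfiguring the NAS, switch back to DHCP
        netsh interface ip set address "Local Area Connection" dhcp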

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-MSB-set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something.

    I'm really stumped regarding how best to package the library. Should I have methods with LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit?

    On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this...

    Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness. Just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
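
    For concreteness, here is a minimal sketch of the two packagings being weighed (names invented for illustration; GetBit here is not the library's actual API):

        // Option 1: one method, endianness selected by an enum parameter
        public enum Endianness { Little, Big }

        public static class Bits
        {
            public static bool GetBit(uint value, int index, Endianness endianness)
            {
                int shift = (endianness == Endianness.Little) ? index : 31 - index;
                return ((value >> shift) & 1u) != 0;
            }
        }

        // Option 2: two static classes with identically named methods
        public static class LSB
        {
            public static bool GetBit(uint value, int index)
            {
                return ((value >> index) & 1u) != 0;
            }
        }

        public static class MSB
        {
            public static bool GetBit(uint value, int index)
            {
                return ((value >> (31 - index)) & 1u) != 0;
            }
        }

    Option 2 keeps the orientation visible at the call site (LSB.GetBit(x, 0)) and avoids a parameter that is almost always a compile-time constant, while Option 1 keeps the public surface to a single class.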

    Read the article

  • State Design Pattern .NET Code Sample

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    class Program
    {
        static void Main(string[] args)
        {
            Person p1 = new Person("P1");
            Person p2 = new Person("P2");
            p1.EatFood();
            p2.EatFood();
            p1.Vomit();
            p2.Vomit();
        }
    }

    interface StomachState
    {
        void Eat(Person p);
        void Vomit(Person p);
    }

    class StomachFull : StomachState
    {
        public void Eat(Person p)
        {
            Console.WriteLine("Can't eat more.");
        }

        public void Vomit(Person p)
        {
            Console.WriteLine("I've just Vomited.");
            p.StomachState = new StomachEmpty();
        }
    }

    class StomachEmpty : StomachState
    {
        public void Eat(Person p)
        {
            Console.WriteLine("I've just had food.");
            p.StomachState = new StomachFull();
        }

        public void Vomit(Person p)
        {
            Console.WriteLine("Nothing to Vomit.");
        }
    }

    class Person
    {
        private StomachState stomachState;
        private String personName;

        public Person(String personName)
        {
            this.personName = personName;
            StomachState = new StomachEmpty();
        }

        public Person(StomachState StomachState)
        {
            this.StomachState = StomachState;
        }

        public StomachState StomachState
        {
            get { return stomachState; }
            set
            {
                stomachState = value;
                Console.WriteLine(personName + " Stomach State Changed to " + StomachState.GetType().Name);
                Console.WriteLine("***********************************************\n");
            }
        }

        public void EatFood()
        {
            StomachState.Eat(this);
        }

        public void Vomit()
        {
            StomachState.Vomit(this);
        }
    }

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is like a journey that starts well before the customer makes a purchase and lasts long after it. The journey may start with the customer researching a product, lead to the eventual purchase, and continue with support or service needs for the product.

    Along a typical customer journey, customers tend to use multiple channels to interact with a company. They also expect a consistent experience, no matter which interaction channel they choose. Customers do not like to repeat information they have already provided, and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers will not only abandon the purchase and go to a competitor but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence "six" others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others.

    Challenges that face B2C companies today include:

    - Delivering consistent experience: This is challenging because of fragmented data, disjointed systems, and siloed multichannel interactions. Customers tend to get different service quality depending on whether they use the web, the phone, or the store. They get different responses from different service agents, or inconsistent answers if they call the sales group rather than the service group. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.

    - Increasing revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that affect the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur.

    - Ensuring compliance: Companies must comply with ever-changing regulations, whether about Know Your Customer (KYC), export/import rules, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties, and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time.

    With the advent of social networks and mobile technology, companies need to focus not only on process efficiency but also on customer engagement. Improving engagement means delivering the experience the customer expects, and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.

    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like marketing, sales, and service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer, and integrates all the underlying channels, systems, and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time.

    Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service), and channels (email, phone, web, social) is the key, and that's what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides the ability to do analysis and decision management. By using data from historical transactions, social media, and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. A real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts, or next-best-action steps for a particular customer.

    Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues escalate. A BPM system can be designed to listen for certain event patterns, deduce customer situations from them (credit card stolen, baggage lost, change of address), and do a triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions.

    Last but not least, one of BPM's key values is to drive continuous improvement. Learning about customers' past experiences, interactions, and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the overall customer experience. You may take these insights as input to design better, more efficient, and more customer-friendly sales, contact center, or self-service processes.

    If customer experience is important for your business, make sure you have incorporated BPM as part of your strategy to design, orchestrate, and improve your customer-facing processes.

    Read the article

  • In Nginx, can I handle either a Location: url or a Content-Type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx - apache reverse proxy where nginx handles the static files and Apache the dynamic ones. I have a search engine, and depending on the search parameter I either directly forward the user to the page they are looking for or provide a set of search results. I cache these results in memcached as:

        key:   /search.cgi?q=foo
        value: LOCATION:http://www.example.com/foo.html

    and:

        key:   /search.cgi?q=bar
        value: CONTENT-TYPE: text/html
               <html>
               ...
               </html>

    I can pull the "Content-Type..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location...". Can I?
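
    For reference, the Content-Type path usually has roughly the shape below (a guess at the setup, not the asker's actual config; the backend address is made up). nginx's memcached module returns the cached value verbatim as the response body, so acting on a LOCATION: payload would need extra logic (an embedded-perl or similar handler) rather than plain memcached_pass:

        # sketch: serve from memcached, fall back to Apache on a miss
        location / {
            set $memcached_key $request_uri;
            memcached_pass 127.0.0.1:11211;
            default_type   text/html;
            error_page 404 502 = @apache;
        }

        location @apache {
            proxy_pass http://127.0.0.1:8080;   # hypothetical Apache backend
        }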

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date.

    Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses.

    The company looked to standardize on an enterprise project management system and selected Oracle's Primavera P6 Enterprise Project Portfolio Management. Webcor uses the solution's advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in the company's expansion into public sector projects during the recent economic downturn; with Primavera P6 in place, Webcor can deliver the precise scheduling and milestone reporting required for large public projects.

    The solution is now in use managing the high-profile University of California, Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation, a fast-paced project located near the seismically active Hayward Fault Zone. Given the University of California's football schedule, meeting the university's deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays; Primavera would deliver those answers. The team also needed to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and to model complicated sequencing requirements where swift decisions would be made to keep the project on track. The schedule is an integral part of Webcor's construction management process for the stadium project.

    Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews the schedule weekly to confirm that progress and forecasted performance are accurate.

    Hired by the university in part for its ability to deliver in high-risk environments, the Webcor team was hit recently with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple "what if" scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with its creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value.

    With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs.

    Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • IPv6 6to4 on Windows Server

    - by Graham Wager
    I'm looking for a relatively simple guide to setting up IPv6 properly on a home network. This network currently has a server (Windows Server 2008 R2) running RRAS, which establishes connectivity to the internet using a demand-dial PPPoE connection and handles the NAT. It also hosts a DNS server and DHCP. My ISP does not support IPv6, but I have a static IPv4 address. I've read about 6to4 and signed up at tunnelbroker.net, but quickly felt out of my depth. How do I configure my network to use it, and how should I configure my DHCP server with regard to IPv6 addresses?
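
    One point of orientation, offered tentatively: tunnelbroker.net provides a configured 6in4 (protocol 41) tunnel rather than 6to4 proper, and a static IPv4 endpoint like the one described is exactly what it wants. As a sketch only, with placeholder values that would come from the tunnel broker's details page, the Windows side of such a tunnel is typically brought up with netsh along these lines:

        netsh interface teredo set state disabled
        netsh interface ipv6 add v6v4tunnel IP6Tunnel <your-static-ipv4> <broker-server-ipv4>
        netsh interface ipv6 add address IP6Tunnel <client-ipv6-from-broker>
        netsh interface ipv6 add route ::/0 IP6Tunnel <server-ipv6-from-broker>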

    Read the article

  • Access functions from user control without events?

    - by BornToCode
    I have an application made with user controls, and a function on the main form that removes the previous user controls and shows the desired user control centered and tweaked:

        public void DisplayControl(UserControl uControl)

    I find it much easier to make this function static, or to access it by reference from the user control, like this:

        MainForm mainform_functions = (MainForm)Parent;
        mainform_functions.DisplayControl(uc_a);

    You probably think it's a sin to access a function in the main form from the user control; however, raising an event seems much more complex in such a case. I'll give a simple example - let's say I raise an event from usercontrol_A to show usercontrol_B on the main form, so I write this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
        };

    Now what if I want usercontrol_B to also have an event to show usercontrol_C? Then it would look like this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
            uc_b.show_uc_c += (s2, e2) =>
            {
                usercontrol_C uc_c = new usercontrol_C();
                DisplayControl(uc_c);
            };
        };

    THIS LOOKS AWFUL! The code is much simpler and more readable when you actually access the function from the user control itself, so I came to the conclusion that in such a case it's not so terrible if I break the rules and don't use events for such a general function. I also think that a readable user control that needs small adjustments for another app is preferable to a 100% 'generic' one that makes my code look like a pile of mud. What is your opinion? Am I mistaken?
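
    A middle ground, sketched under assumptions (the ISwitchboard name and members are invented for illustration): have the main form implement a small navigation interface, so user controls depend on an abstraction rather than on MainForm itself, and no event chaining is needed:

        // hypothetical interface; not from the original code
        public interface ISwitchboard
        {
            void DisplayControl(UserControl uControl);
        }

        public partial class MainForm : Form, ISwitchboard
        {
            public void DisplayControl(UserControl uControl)
            {
                // remove previous user controls, center and show the new one
                // (body as in the original application)
            }
        }

        // inside usercontrol_A, navigating to usercontrol_B:
        private void GoToB()
        {
            ISwitchboard switchboard = (ISwitchboard)FindForm();
            switchboard.DisplayControl(new usercontrol_B());
        }

    The cast still happens, but against an interface the control conceptually owns, so the control can be reused under any form that implements ISwitchboard.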

    Read the article

  • How to do dependency Injection and conditional object creation based on type?

    - by Pradeep
    I have a service endpoint initialized using DI. It is of the following style, and this endpoint is used across the app:

        public class CustomerService : ICustomerService
        {
            private IValidationService ValidationService { get; set; }
            private ICustomerRepository Repository { get; set; }

            public CustomerService(IValidationService validationService, ICustomerRepository repository)
            {
                ValidationService = validationService;
                Repository = repository;
            }

            public void Save(CustomerDTO customer)
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    Now, with changing requirements, there are going to be different types of customers (legacy/regular). The requirement is that, based on the type of the customer, I have to validate and persist the customer in a different way (e.g. for a legacy customer, persist to a LegacyRepository). The wrong way to do this would be to break DI and do something like:

        public void Save(CustomerDTO customer)
        {
            if (customer.Type == CustomerTypes.Legacy)
            {
                if (LegacyValidationService.Valid(customer))
                    LegacyRepository.Save(customer);
            }
            else
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    My options seem to be:

    - Inject all possible IValidationService and ICustomerRepository implementations and switch based on type, which seems wrong.
    - Change the service signature to Save(IValidationService validation, ICustomerRepository repository, CustomerDTO customer), which is an invasive change.
    - Break DI.
    - Use the strategy pattern for each type and do something like the following, but now I have a static method which needs to know how to initialize different services:

        validation = CustomerValidationServiceFactory.GetStrategy(customer.Type);
        validation.Valid(customer);

    I am sure this is a very common problem. What is the right way to solve this without changing service signatures or breaking DI?
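
    One common resolution, sketched with invented names: keep constructor injection, but inject the full set of per-type strategies and let the service pick one by the customer's type. Containers that can inject an IEnumerable of registrations make the static factory disappear entirely:

        using System.Collections.Generic;
        using System.Linq;

        // hypothetical strategy abstraction pairing validation with persistence
        public interface ICustomerSaveStrategy
        {
            CustomerTypes Type { get; }
            bool Valid(CustomerDTO customer);
            void Save(CustomerDTO customer);
        }

        public class CustomerService : ICustomerService
        {
            private readonly IEnumerable<ICustomerSaveStrategy> strategies;

            // the container registers one strategy per customer type
            public CustomerService(IEnumerable<ICustomerSaveStrategy> strategies)
            {
                this.strategies = strategies;
            }

            public void Save(CustomerDTO customer)
            {
                ICustomerSaveStrategy strategy = strategies.Single(s => s.Type == customer.Type);
                if (strategy.Valid(customer))
                    strategy.Save(customer);
            }
        }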

    Read the article

  • LinkSys WRT54GL + AM200 in half-bridge mode - Setup guide recommendations?

    - by Peter Mounce
    I am basically looking for a good guide on how to set up my home network with this set of hardware. I need:

    - Dynamic DNS
    - Firewall + port-forwarding
    - VPN
    - Wake-on-LAN from outside the firewall
    - VOIP would be nice
    - QoS would be nice (make torrents take lower priority than other services when those other services are happening)
    - DHCP
    - Wireless + WPA2 security
    - Ability to play multiplayer computer games

    I am not a networking or computing neophyte, but the last time I messed with network gear was a few years ago, so am needing to dust off knowledge I kinda half have. I have read that I should be wanting to set up the AM200 in half-bridge mode, so that the WRT54GL gets the WAN IP - this sounds like a good idea, but I'd still like to be advised. I have read that the dd-wrt firmware will meet my needs (though I gather I'll need the vpn-specific build, which appears to preclude supporting VOIP), but I'm not wedded to using it. My ISP supplies me with:

    - a block of 8 static IPs, of which 5 are usable to me
    - a PPPoA ADSL2+ connection

    Read the article

  • How to connect to a wireless network that has a two word name with a space?

    - by grinan
    Ok, I have searched for some time in earnest for an answer to this question. I have a Beagleboard running a minimal Ubuntu 10.10 install for ARM. With the default install - minimal tools, no GUI - I am unable to connect to my wireless network. The name of my network is "MYNAME NETWORK". Editing /etc/network/interfaces with a text editor, I cannot seem to connect at all. As an experiment, I connected to a friend's network, which has a one-word name ("dystek"), and was able to connect with zero issues, and to update and install a full GUI for Ubuntu ARM. The problem is that I don't want a full-blown GUI on the Beagleboard; a minimal install of Ubuntu with the CLI is all I need or want. Is there any way to connect to my wireless network by editing the /etc/network/interfaces file? Surely there is... I just don't know how. Right now my interfaces file looks like this:

        auto lo
        iface lo inet loopback

        auto wlan0
        iface wlan0 inet static
            address 192.168.1.200
            netmask 255.255.255.0
            gateway 192.168.1.1
            wireless-essid BARRETT NETWORK
            wireless-key 46456xxxxxxxx

    Any help would be appreciated.
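
    The likely sticking point is that the wireless-tools hooks seem to split the essid value on whitespace, so a two-word name never reaches the driver intact. One workaround, sketched under the assumption that the wpasupplicant package is installed (it quotes the SSID properly and can also handle the WEP key shown above; option names as per its ifupdown integration):

        auto wlan0
        iface wlan0 inet static
            address 192.168.1.200
            netmask 255.255.255.0
            gateway 192.168.1.1
            wpa-ssid "MYNAME NETWORK"
            # WEP handled via wpa_supplicant; key unchanged from the question
            wpa-key-mgmt NONE
            wpa-wep-key0 46456xxxxxxxx
            wpa-wep-tx-keyidx 0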

    Read the article

  • slick2d missiles

    - by kirchhoff
    Hey, I'm making a game in Java with Slick2D and I want to create planes which shoot:

        int maxBullets = 40;
        static int bullet = 0;
        Missile missile[] = new Missile[maxBullets];

    I want to create/move my missiles in the most efficient way, and I would appreciate your advice:

        public void shoot() throws SlickException {
            if (bullet < maxBullets) {
                if (missile[bullet] != null) {
                    missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
                } else {
                    missile[bullet] = new Missile("resources/missile.png", plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
                }
            } else {
                bullet = 0;
                missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
            }
            bullet++;
        }

    I created the method resetLocation in my Missile class in order to avoid loading the resource again. Is it correct? In the update method I've got this to move all the missiles:

        if (bullet > 0 && bullet < maxBullets) {
            float hyp = 0.4f * delta;
            if (bullet == 1) {
                missile[0].move(hyp);
            } else {
                for (int x = 0; x < bullet; x++) {
                    missile[x].move(hyp);
                }
            }
        }

    Read the article

  • Mercurial mirror: abort: No such file or directory: http://[...]/00manifest.i

    - by Sridhar Ratnakumar
    I am trying to set up a daily mirror of a Mercurial repository - code.python.org in particular - within our local network, and serve it via Apache httpd. On the remote host that runs Apache, I did this:

        $ cd /var/www
        $ hg clone http://code.python.org/hg/trunk/

    On my MacBook, I ran:

        $ hg -v clone http://remote/trunk/
        (falling back to static-http)
        abort: No such file or directory: http://remote/trunk/.hg/store/00manifest.i

    Google does not show any relevant result for this particular error. I remember back in the day being able to set up Bazaar mirrors with a simple clone. Doesn't Mercurial work like that? How do I set up a mirror that can also act as a clone URL?

    Read the article

  • What is the safest and least expensive way to store 10 terabytes of data?

    - by Josh T
    I'm a member of a production company and we're preparing for our first feature film. We've been discussing methods of data storage to keep all of our original content safe (for as long as possible). While we understand data is never 100% safe, we'd like to find the safest solution for us. We've considered:

    - 16TB NAS for on-site storage
    - 4-5 2TB hard drives (cheap, but not redundant), copy original footage to drives then seal in static-free bags
    - Burn data to Blu-ray disks (time consuming and expensive: 200 disks == $5000)
    - Tape drive(s)?

    I know the least about tape drives, except the fact that they're more reliable than disks. Any experience/knowledge with this amount of data is hugely appreciated.

    Read the article

  • Linksys WRT54GS V6 Router Blinking Power Light

    - by Frank
    I have a Linksys WRT54GS V6 router in my possession; got it at my local Goodwill for $5. Upon startup, the power LED starts flashing like crazy, and at the same time the Ethernet ports all light up once and then turn off (DMZ and WLAN never turn on). I can ping the router only by setting a static IP on my PC. I can also successfully push a file (official Linksys OS and DD-WRT) into it via TFTP, but this currently does nothing (no 192.168.1.1 access). Any ideas as to what may be wrong? I think it's pretty obvious that it's bricked, but... I keep hearing a whole lot of "if it pings, it's fixable" on the internet.

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we've looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code - expression trees. All the callsite binder has to return is an expression (called a 'restriction') that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn't. If the bind result is also represented in an expression tree, these can be combined easily, like so:

        if ([restriction is true]) {
            [invoke cached method]
        }

    Take my example from my previous post:

        public class ClassA
        {
            public static void TestDynamic()
            {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }

            public static void CallDynamic(dynamic d, object o)
            {
                d.Method(o);
            }

            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
            thisClassA.Method(i);
        }

    Caching callsite results

    So now it's up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one of the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It'll help if you've got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved:

    1. The last method invoked on the callsite.
    2. All methods that have ever been invoked on the callsite.
    3. All methods that have ever been invoked on any callsite of the same type.

    We'll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren't any entries in the caches to reuse, and invokes the binder to resolve a new method. Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I'm assuming there's no return value):

        if ([restriction is true]) {
            [invoke cached method]
            return;
        }
        if (callSite._match) {
            _match = false;
            return;
        }
        else {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What's going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the 'primary' method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast.

    But what if the restrictions don't match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it's slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don't want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder's result, compile it, and then use the resulting delegate everywhere in the callsite. But if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation: an 'invoke' mode, for when the delegate is set as the value of the Target field, and a 'match' mode, used when UpdateAndExecute is searching for a method in the callsite's cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>.

    The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions. This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear - if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the 'global' cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code: extensive use of expression trees, compiled to IL and then into native code; multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked; and a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
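
    For readers who want to poke at this machinery directly, below is a sketch of roughly what the compiler emits for the d.Method(o) call in the example above - a hand-written approximation, not actual compiler output, with the argument flags simplified:

        using System;
        using System.Runtime.CompilerServices;
        using Microsoft.CSharp.RuntimeBinder;
        using Binder = Microsoft.CSharp.RuntimeBinder.Binder;

        public static class CallSiteDemo
        {
            // roughly equivalent to: d.Method(o); where d is dynamic
            static readonly CallSite<Action<CallSite, object, object>> site =
                CallSite<Action<CallSite, object, object>>.Create(
                    Binder.InvokeMember(
                        CSharpBinderFlags.ResultDiscarded, "Method", null, typeof(CallSiteDemo),
                        new[]
                        {
                            CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null), // receiver
                            CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null)  // argument
                        }));

            public static void CallDynamic(object d, object o)
            {
                // site.Target is the level-1 cached delegate discussed above
                site.Target(site, d, o);
            }
        }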

    Read the article

  • Encrypt remote linux server

    - by Margaret Thorpe
    One of my customers has requested that their web server be encrypted to prevent offline attacks on highly sensitive data contained in a MySQL database and also in /var/log. I have full root access to the dedicated server at a popular host. I am considering 3 options:

    - FDE - full-disk encryption would be ideal, but with only remote access (no console) I imagine this would be very complex.
    - Xen - installing Xen, moving their server into a Xen virtual machine, and encrypting the VM, which seems easier to do remotely.
    - Partition - encrypting the non-static partitions where the sensitive data resides, e.g. /var, /home, etc.

    What would be the simplest approach that satisfies the requirements?
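
    For the partition option, the mechanics are roughly as below (a sketch, assuming a spare partition /dev/sda3 and data migration handled separately; the open question remains that someone must supply the passphrase over SSH after every reboot, since there is no console):

        # one-time setup of an encrypted /var on a spare partition
        cryptsetup luksFormat /dev/sda3
        cryptsetup luksOpen /dev/sda3 var_crypt
        mkfs.ext3 /dev/mapper/var_crypt
        mount /dev/mapper/var_crypt /var

        # after each reboot, unlock manually over SSH
        cryptsetup luksOpen /dev/sda3 var_crypt && mount /dev/mapper/var_crypt /var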

    Read the article

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific PHP files on the last step to direct the user to the proper place. Example:

        www.mysite.com/action/go-to-blue.php

    or:

        www.mysite.com/action/short/go-to-red.php
        www.mysite.com/action/tall/go-to-red.php

    We are now restructuring to eliminate the /short/ or /tall/ directory. What this means is that go-to-blue.php will now be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective, because, well, if they left from that page, we knew we had it right. Now, since we are 301-redirecting action/short/go-to-red.php to just action/go-to-red.php, it is quite important in go-to-red.php that we realize a user may have been redirected from /short/ or /tall/.

    So right now I am using HTTP_REFERER, and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution. So I was starting to brainstorm other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place.

    Some questions/comments:

    - Could I use a session variable or a cookie to accomplish this goal? If so, would that be maintained through the 301 redirect? I don't see why it wouldn't be.
    - Passing the url in the url is not an option in this case.
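
    One possibility that respects the "no url in the url" constraint, sketched under the assumption that the 301s are issued by mod_rewrite: the RewriteRule CO flag can set a cookie at redirect time, which the destination script can then check alongside HTTP_REFERER (domain and cookie name are made up):

        # sketch: 301 old paths to the new one and mark the origin in a cookie
        RewriteEngine On
        RewriteRule ^action/short/(.*)$ /action/$1 [R=301,L,CO=came_from:short:.mysite.com]
        RewriteRule ^action/tall/(.*)$  /action/$1 [R=301,L,CO=came_from:tall:.mysite.com]

    Cookies survive a same-domain redirect, so go-to-red.php could read $_COOKIE['came_from'] as the second signal being sought.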

    Read the article

  • Grant relay to servers based on AD security group membership

    - by john
    We're moving our relay from an Exchange 2003 server to an Exchange 2010 server. I was hoping the "Grant or deny relay permissions to specific users or groups" option would still be available in some form, but I can't find out how to do it. I've read up on receive connectors, and so far I can't get it to work. I have edited the security on the receive connector to allow the following extended rights to the group, and added computer accounts to that group:

    - Accept Routing Headers
    - Bypass Anti-spam
    - Submit to Server
    - Accept any Sender
    - Accept any Recipient

    Then I suddenly realised while testing... how would the receive connector resolve the permission to a particular AD object - maybe a reverse DNS lookup? What I'd like to know is whether what I'm trying to achieve is possible, and how. I would rather not revert to an IP-based list, as this is not as manageable, and I'm trying to avoid creating static IPs/reservations for a number of workstations that would otherwise not need them.
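
    For context, the usual shell-side way of granting relay rights to a group looks something like the sketch below (connector and group names invented). The catch, and the likely answer to the reverse-DNS puzzle, is that these permissions are evaluated against the authenticated session: the sending machine has to authenticate to the connector (e.g. Integrated Windows authentication) for its computer account's group membership to be considered, while anonymous SMTP connections are never matched to a computer account:

        Get-ReceiveConnector "SERVER\Relay Connector" |
            Add-ADPermission -User "DOMAIN\Relay Computers" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"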

    Read the article

  • Running a home mail server using dynamic dns [closed]

    - by Anand
    Hi, is it possible to run an email server on my home box using dynamic DNS? The scenario is: I want to auto-CC all incoming and outgoing emails from one account to another via some server-side config, instead of configuring email-client rules. I have tried Google Apps Mail, but it doesn't allow auto-CC of outgoing emails. Having read tons of blogs, forum messages, etc. (hope I have been reading the correct info :) ), the only option to achieve what I need is to set up my own mail server, but the cost of getting a static IP doesn't fit my budget. Please can someone point me in the correct direction. Platform doesn't matter; I can set up a Windows or Linux server. Many thanks
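
    On the server-side auto-copy itself (separate from the dynamic DNS question), Postfix has first-class hooks for this; strictly speaking they produce a BCC rather than a visible CC. A sketch, assuming Postfix and made-up addresses:

        # /etc/postfix/main.cf -- copy mail sent by, and delivered to, an address
        sender_bcc_maps    = hash:/etc/postfix/bcc_map
        recipient_bcc_maps = hash:/etc/postfix/bcc_map

        # /etc/postfix/bcc_map (run "postmap /etc/postfix/bcc_map" after editing)
        me@example.com    archive@example.com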

    Read the article

  • Security measures for CentOS

    - by cappuccinodrinker
    I have been tightening up my web server security and wanted to know what else I can do. I am running CentOS 5 with these measures:

    - All passwords to FTP, MySQL, etc. are generated from grc.com/passwords.htm and microsoft.com/protect/fraud/passwords/create.aspx (for the ones which cannot be too long).
    - Running iptables with all ports shut off except for http, mail, and smtp; the important ports like FTP and SSH are blocked to all except my static office IP. There is also no response to pings.
    - Rootkit Hunter running daily.
    - The server is PCI compliant according to Comodo.
    - Not running any badly made PHP apps; we use Zend Framework for our stuff, have Kayako installed, and keep them up to date.

    I can't really think of anything else I can do... I could implement a brute-force measure, but I think I already have by simply changing my SSH port to a number above 10000 and blocking it off with iptables.
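
    If a brute-force throttle ever becomes necessary on a port that cannot stay IP-restricted, iptables can do it natively with the recent module; a sketch, with the port and thresholds as placeholders:

        # drop new connections from any source opening 4+ SSH connections in 60s
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP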

    Read the article

  • Disabling networkmanager for a specific interface

    - by bdonlan
    I'd like to do some experimentation with hostap without disabling my primary wireless interface. How do I tell NetworkManager to keep its hands off a specific interface or interfaces while allowing it to continue managing all other interfaces normally? I'm using Ubuntu 9.04. (Wasn't sure if this should go on Super User or Server Fault, as NetworkManager isn't much of a 'server' tool - if it belongs on Server Fault please feel free to move it.)

    Edit: I've tried adding this to /etc/network/interfaces:

        allow-hotplug wlan2
        iface wlan2 inet static
            address 192.168.49.1
            netmask 255.255.255.0

    But this has no apparent effect, even after restarting NetworkManager. Here's my /etc/NetworkManager/nm-system-settings.conf:

        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        managed=false
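
    One avenue worth noting, as a sketch (the MAC address below is a placeholder): the keyfile plugin already listed in nm-system-settings.conf supports an explicit unmanaged-devices list, which pins the exclusion to the hardware address instead of relying on the ifupdown plugin noticing the interfaces entry:

        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        managed=false

        [keyfile]
        # replace with wlan2's real MAC address
        unmanaged-devices=mac:00:11:22:33:44:55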

    Read the article

  • Is there ever a reason to do all an object's work in a constructor?

    - by Kane
    Let me preface this by saying this is not my code nor my coworkers' code. Years ago, when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen.

    As our company has grown, we've tried to take more of our development in house. This particular project landed in my lap, so I've been going over it, cleaning it up, adding tests, etc. There's one pattern I see repeated a lot, and it seems so mindblowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example (the code is in Java, if that matters, but I hope this is a more general question):

        public class Foo {
            private int bar;
            private String baz;

            public Foo(File f) {
                execute(f);
            }

            private void execute(File f) {
                // FTP the file to some hardcoded location,
                // or parse the file and commit to the database, or whatever
            }
        }

    If you're wondering, this type of code is often called in the following manner:

        for (File f : someListOfFiles) {
            new Foo(f);
        }

    Now, I was taught long ago that instantiating objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code, it looks like it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want", which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF?

    Read the article
