Search Results

Search found 18395 results on 736 pages for 'admin interface'.


  • Object design problem for a simple school application

    - by Aragornx
    I want to create a simple school application that provides grades, notes, attendance, etc. for students, teachers and parents. I'm trying to design the objects for this problem and I'm a little confused, because I'm not very experienced in class design. Some of my current classes are:

        class PersonalData {
            private String name;
            private String surname;
            private Calendar dateOfBirth;
            [...]
        }
        class Person {
            private PersonalData personalData;
        }
        class User extends Person {
            private String login;
            private char[] password;
        }
        class Student extends Person {
            private ArrayList<Counselor> counselors = new ArrayList<>();
        }
        class Counselor extends Person {
            private ArrayList<Student> children = new ArrayList<>();
        }
        class Teacher extends Person {
            private ArrayList<SchoolClass> schoolClasses = new ArrayList<>();
            private ArrayList<Subject> subjects = new ArrayList<>();
        }

    This is of course only a general idea, but I'm sure it's not the best way. For example, I want one person to be able to be both a Teacher and a Parent (Counselor), and the present approach forces me to have two Person objects. I also want a user, after successfully logging in, to get all the roles it has (Student, or Teacher, or (Teacher & Parent)). I think I should define and use some interfaces, but I'm not sure how to do this right. Maybe like this:

        interface Role { }
        interface TeacherRole extends Role {
            void addGrade(Student student, Grade grade, [...]);
        }
        class Teacher implements TeacherRole {
            private Person person;
            [...]
        }
        class User extends Person {
            ArrayList<Role> roles = new ArrayList<>();
        }

    If anyone could help me get this right, or just point me to some literature or articles that cover practical object design, I would appreciate it.
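
    One common way to get a single person with several roles is composition: keep one User object and attach Role objects to it, instead of subclassing Person per role. Below is a minimal, self-contained Java sketch of just that idea; the class and method names (TeacherRole, hasRole, and so on) are illustrative, not a prescribed design.

        import java.util.ArrayList;
        import java.util.List;

        // Marker for any role a user can hold.
        interface Role { }

        // A teacher role; the grade-book details are placeholders.
        class TeacherRole implements Role {
            void addGrade(String studentName, int grade) {
                System.out.println(studentName + " -> " + grade);
            }
        }

        class ParentRole implements Role { }

        // One User object, any number of roles attached by composition.
        class User {
            private final String name;
            private final List<Role> roles = new ArrayList<>();

            User(String name) { this.name = name; }

            void addRole(Role role) { roles.add(role); }

            boolean hasRole(Class<? extends Role> type) {
                for (Role r : roles) {
                    if (type.isInstance(r)) return true;
                }
                return false;
            }
        }

        public class RoleDemo {
            public static void main(String[] args) {
                User anna = new User("Anna");
                anna.addRole(new TeacherRole());
                anna.addRole(new ParentRole()); // same person, two roles
                System.out.println(anna.hasRole(TeacherRole.class)); // true
            }
        }

    After login, the application can inspect the roles attached to the single User and enable the matching features; whether each role also carries its own data (a teacher's classes, a parent's children) is a separate decision this sketch doesn't settle.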

    Read the article

  • Guidelines for creating referentially transparent callables

    - by max
    In some cases, I want to use referentially transparent callables while coding in Python. My goals are to help with handling concurrency, memoization, unit testing, and verification of code correctness. I want to write down clear rules for myself and other developers to follow that would ensure referential transparency. I don't mind that Python won't enforce any rules - we trust ourselves to follow them. Note that we never modify functions or methods in place (i.e., by hacking into the bytecode). Would the following make sense? A callable object c of class C will be referentially transparent if:
    1. Whenever the returned value of c(...) depends on any instance attributes, global variables, or disk files, such attributes, variables, and files must not change for the duration of the program execution; the only exception is that instance attributes may be changed during instance initialization.
    2. When c(...) is executed, no modifications to the program state occur that may affect the behavior of any object accessed through its "public interface" (as defined by us).
    If we don't put any restrictions on what "public interface" includes, then rule #2 becomes: when c(...) is executed, no objects are modified that are visible outside the scope of c.__call__. Note: I unsuccessfully tried to ask this question on SO, but I'm hoping it's more appropriate for this site.
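
    The rules above are not Python-specific, and the distinction they draw can be illustrated in any language. A small Java sketch (the class names are invented): the first callable satisfies both rules because its result depends only on its argument and on a field fixed at construction time, while the second violates rule #1 by reading state that changes between calls.

        // Referentially transparent: the result depends only on the argument
        // and on a field that never changes after initialization (rule #1),
        // and the call mutates nothing visible outside itself (rule #2).
        final class Scaler {
            private final double factor;      // fixed at construction
            Scaler(double factor) { this.factor = factor; }
            double apply(double x) { return factor * x; }
        }

        // Not referentially transparent: the result depends on a counter
        // that is mutated on every call, so equal inputs give unequal outputs.
        final class Sampler {
            private int calls = 0;            // changes after initialization
            double apply(double x) { return x + (calls++); }
        }

        public class RtDemo {
            public static void main(String[] args) {
                Scaler s = new Scaler(2.0);
                System.out.println(s.apply(3.0)); // 6.0, every time
                Sampler t = new Sampler();
                System.out.println(t.apply(3.0)); // 3.0 this time...
                System.out.println(t.apply(3.0)); // ...4.0 the next
            }
        }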

    Read the article

  • Architecture Question

    - by katie77
    I am writing a rules/eligibility module. I have two sets of data: customer data and customer product data, with a one-to-many relationship from customer to customer products. I have to run a set of eligibility rules against each customer product record. For each record, a rule can either declare the customer eligible for that product or decline eligibility and move on to the next product record. So every rule needs access to the customer data and to the particular customer product record it is being executed against. Since every rule can either approve or decline a product, I created an interface with those two methods and implemented it for all the rules. I am passing the customer data and one product record to every rule (because the rules are executed once per row of customer product data). Ideally the customer and customer product data would simply be available to the rule instead of being passed to each one. What is the best way of doing this in terms of architecture?
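
    One conventional way to make the data "available to the rule" rather than passed argument by argument is to bundle it into an evaluation context that every rule receives. The Java sketch below shows that shape; the type and member names (EligibilityContext, MinimumAgeRule, and so on) are invented for illustration, not taken from the question.

        import java.util.List;

        class Customer { int age; Customer(int age) { this.age = age; } }
        class CustomerProduct { int minimumAge; CustomerProduct(int minAge) { this.minimumAge = minAge; } }

        // Everything a rule may need for one evaluation, bundled once.
        class EligibilityContext {
            final Customer customer;
            final CustomerProduct product;   // the single record under evaluation
            EligibilityContext(Customer customer, CustomerProduct product) {
                this.customer = customer;
                this.product = product;
            }
        }

        interface EligibilityRule {
            // true = approve, false = decline and move to the next record
            boolean isEligible(EligibilityContext ctx);
        }

        class MinimumAgeRule implements EligibilityRule {
            public boolean isEligible(EligibilityContext ctx) {
                return ctx.customer.age >= ctx.product.minimumAge;
            }
        }

        // Runs every rule against one customer/product pair.
        class EligibilityEngine {
            private final List<EligibilityRule> rules;
            EligibilityEngine(List<EligibilityRule> rules) { this.rules = rules; }

            boolean evaluate(Customer c, CustomerProduct p) {
                EligibilityContext ctx = new EligibilityContext(c, p);
                for (EligibilityRule rule : rules) {
                    if (!rule.isEligible(ctx)) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                EligibilityEngine engine = new EligibilityEngine(List.of(new MinimumAgeRule()));
                System.out.println(engine.evaluate(new Customer(30), new CustomerProduct(18))); // true
            }
        }

    The engine stays in charge of iterating the one-to-many product records per customer, so individual rules never deal with the collection - only with the single record under evaluation.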

    Read the article

  • Could a singleton type replace static methods and classes?

    - by MKO
    In C#, static methods have long served a purpose by letting us call them without instantiating classes. Only in later years have we become more aware of the problems of using static methods and classes: they can’t use interfaces, they can’t use inheritance, and they are hard to test because you can’t make mocks and stubs. Is there a better way? Obviously we need to be able to access library methods without instantiating classes all the time, otherwise our code would become pretty cluttered. One possible solution is to use a new keyword for an old concept: the singleton. Singletons are global instances of a class, and since they are instances we can use them as we would normal classes. To make their use nice and practical we'd need some syntactic sugar, however. Say that Math were of type singleton instead of an actual class. The actual class containing all the default methods for the Math singleton is DefaultMath, which implements the interface IMath. The singleton would be declared as

        singleton Math : IMath { public Math { this = new DefaultMath(); } }

    If we wanted to substitute our own class for all math operations, we could make a new class MyMath that inherits DefaultMath, or we could just implement the interface IMath and create a whole new class. To make our class the active Math class, you'd do a simple assignment, Math = new MyMath(); and voilà! The next time we call Math.Floor it will call your method. Note that for a normal singleton we'd have to write something like Math.Instance.Floor, but the compiler eliminates the need for the Instance property. Another idea would be to allow a singleton to be defined as lazy, so it gets instantiated only when it's first used, like lazy singleton Math : IMath. What do you think - would it have been a better solution than static methods and classes? Are there any problems with this approach?
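
    The syntax above is hypothetical, and the question is about C#, but the effect it describes - a statically reachable name whose implementation can be swapped - can already be approximated in most languages with a static facade delegating to an interface. A rough Java sketch of that idea, with invented names:

        interface MathOps {
            double floor(double x);
        }

        // The default implementation, analogous to DefaultMath.
        class DefaultMathOps implements MathOps {
            public double floor(double x) { return Math.floor(x); }
        }

        // The "singleton" facade: callers write AppMath.floor(x) everywhere,
        // and tests or alternative libraries can swap the delegate.
        final class AppMath {
            private static MathOps delegate = new DefaultMathOps();

            static void use(MathOps impl) { delegate = impl; }   // the Math = new MyMath(); step
            static double floor(double x) { return delegate.floor(x); }
        }

        public class SingletonSwapDemo {
            public static void main(String[] args) {
                System.out.println(AppMath.floor(2.7));          // 2.0, default implementation
                AppMath.use(x -> Math.floor(x) + 100);           // swap in a stub
                System.out.println(AppMath.floor(2.7));          // 102.0
            }
        }

    This keeps call sites as terse as static methods while restoring interfaces, inheritance and mockability; the usual caveat is that a mutable global delegate needs care when tests run concurrently.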

    Read the article

  • What makes you look like a bad developer (i.e. a hacker) [on hold]

    - by user134583
    This comes from a lot of people about me, so I have to look at myself and ask what makes someone a bad developer (i.e. a hacker). Here are a few things about me:
    - I use the IDE intensively - all features, you name it: auto-completion, refactoring, quick fixes, open type, view hierarchy, API documentation, etc.
    - When I write code for a project in a domain I'm not used to (I can't be fluent in it, it's new to me), I only have very rough high-level ideas.
    - I don't use the standard modeling diagrams for early detailed planning, only unorthodox diagrams I invented for when I need to draw the design in detail. I don't use UML or similar - I find them insufficient. I divide the diagrams I draw into three types: very high-level diagrams that can probably be understood by almost anybody; data entity diagrams used for modeling data objects only (like ER diagrams, and trees for inheritance and composition); and action diagrams for agents/classes and their interactions on the data objects they contain.
    - I constantly change the interface (public methods) between interacting agents/classes if the need arises, and am more restrained once the interface and the module have matured.
    - I write initial concept code in a quick, hackish way, just so that the module works in the general cases and I can play around with it. The module is then refactored intensively, so I can see corner cases I couldn't (or wouldn't want to) anticipate before writing code.
    - I use JUnit for integration-like tests, by using the TestSuite class and ordering the unit test classes in the suite (see the sketch below).
    - I use the debugger almost any time there is a problem, instead of reading the code.
    - I constantly search the internet for how to do something with a library I haven't used much.
    So, your judgment: am I a bad developer? A hacker? Put another way, to make sure this is not considered off-topic: is it bad practice to keep your code this malleable during the incubating/prototyping phase of software development? And is it bad practice to use JUnit for integration testing? (I know there are other frameworks for integration testing, but they target specific products, not the general case.)
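
    For reference, the JUnit pattern mentioned in the list - grouping integration-style tests into a suite so the test classes run in a fixed order - looks roughly like this with JUnit 4's Suite runner (the test class names are invented):

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // The classes run in the order they are listed, so later tests can
        // rely on the state built up by earlier ones (the integration-style
        // ordering described above).
        @RunWith(Suite.class)
        @Suite.SuiteClasses({
                CreateAccountTest.class,
                DepositTest.class,
                WithdrawTest.class
        })
        public class BankIntegrationSuite {
            // No body needed; the annotations do the work.
        }

    The suite runs the listed classes in declaration order, but JUnit does not promise any particular order of the methods inside each class, which is one of the usual objections to using it this way for integration testing.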

    Read the article

  • Type of AI to tackle this problem?

    - by user1154277
    I posted this on Stack Overflow but want to get your recommendations as well, as a user there recommended I post it here. I'll say from the beginning that I am not a programmer; I have a cursory knowledge of the different types of AI and am just a businessman building a web app. The web app I am investing in is for a hobby of mine. There are many part manufacturers, product manufacturers, upgrade and add-on manufacturers, etc. for the hardware/products in this hobby's industry. Currently I am building a crowdsourced platform where knowledgeable people can go in and mark up compatibility between those parts, as it's not always clear cut. For example: Manufacturer A makes an "A" class product, and Manufacturer B makes an upgrade/part that generally goes with class "A" products but is, for one reason or another, not compatible with Manufacturer A's particular "A" class product. However, a good chunk (60%-70%) of the products/parts in the database can have their compatibility inferred from their properties. For example: part 1 is type "A" with an X mm receiver and part 2 is also type "A" with an X mm interface, so the two parts are compatible; or part 1 is an 8 mm gear, so any 8 mm bushing from any manufacturer is compatible with part 1. Furthermore, gears can only have compatibility relationships in the database with bushings and gearboxes; there can be no meaningful compatibility between a gear and a rail or a receiver, since those parts don't interface. What I want is an AI that can learn from the decisions of the crowdsourced platform community and infer compatibility for new parts/products based on their tagged attributes, what type of part they are, etc. What would be the best form of AI to tackle this? I was thinking of an expert system, but explicitly engineering all of the knowledge rules would be daunting because of the complex relations between literally tens of thousands of parts, hundreds of part types, and many manufacturers. Would an ANN (neural network) be ideal for learning from the many inputs/decisions of the crowdsourced platform users? Any help/input is much appreciated.
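
    To make the "inferred from properties" part concrete: that portion of the problem is a set of deterministic predicates over tagged attributes, which is what a rules/expert-system style encoding captures; learning new rules from the crowd's decisions is the separate, harder part. A minimal Java sketch with invented part types and attribute names:

        import java.util.Map;

        // A part is just a type plus a bag of tagged attributes.
        record Part(String type, Map<String, String> attributes) { }

        class CompatibilityRules {
            // Gears only ever relate to bushings and gearboxes.
            static boolean canInterface(Part a, Part b) {
                if (a.type().equals("gear")) {
                    return b.type().equals("bushing") || b.type().equals("gearbox");
                }
                return true; // other pairings would be handled by further rules
            }

            // Example attribute rule: matching bore/shaft diameter.
            static boolean compatible(Part a, Part b) {
                if (!canInterface(a, b)) return false;
                String d1 = a.attributes().get("diameter_mm");
                String d2 = b.attributes().get("diameter_mm");
                return d1 != null && d1.equals(d2);
            }

            public static void main(String[] args) {
                Part gear = new Part("gear", Map.of("diameter_mm", "8"));
                Part bushing = new Part("bushing", Map.of("diameter_mm", "8"));
                Part rail = new Part("rail", Map.of("length_mm", "370"));
                System.out.println(compatible(gear, bushing)); // true
                System.out.println(compatible(gear, rail));    // false, wrong part type
            }
        }

    A learned model (whether a decision-tree learner or an ANN) would in effect be trying to recover predicates like these from the community's yes/no votes, using the tagged attributes as features.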

    Read the article

  • Ubuntu Wireless not working on Lenovo t400

    - by VmaxBoss
    This problem started after upgrading to 12.04, an my system is 'up2date' Have tried most of the solution-proposals found on the net. lspci -nnk | grep -iA2 net 00:19.0 Ethernet controller [0200]: Intel Corporation 82567LF Gigabit Network Connection [8086:10bf] (rev 03) Subsystem: Lenovo Device [17aa:20ee] Kernel driver in use: e1000e 03:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237] Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211] Kernel driver in use: iwlagn iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11abgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off sudo lshw -C network *-network description: Ethernet interface product: 82567LF Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 03 serial: 00:22:68:1a:c4:75 size: 100Mbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.0.2-k2 duplex=full firmware=1.8-3 ip=192.168.2.154 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:29 memory:fc000000-fc01ffff memory:fc024000-fc024fff ioport:1820(size=32) *-network DISABLED description: Wireless interface product: PRO/Wireless 5100 AGN [Shiloh] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 00 serial: 00:26:c6:6c:2d:24 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn latency=0 multicast=yes wireless=IEEE 802.11abgn resources: irq:30 memory:f4300000-f4301fff Please help Br/VB

    Read the article

  • Ubuntu 13.04 can only work in recovery mode

    - by zhangyangyu
    I have just updated from 12.10 to 13.04 and everything is up to date, but I can only boot to a black screen - I mean after the GRUB menu and the purple splash screen. I can hear the sound of the login screen, but all I see is the black screen. Everything worked fine in 12.10. It does work in recovery mode: if I go into recovery mode and choose "resume", everything is OK, although the screen is garbled while the kernel is loading. I don't know why, and I have Googled a lot, but no solution works. My graphics card is an Intel GMA HD 4000, detected as VESA: Intel® Sandybridge/Ivybridge Graphics. I have been stuck on this for a whole day and really need help. By the way, the kernel is 3.8.0-19, if that helps.

    Read the article

  • After upgrade to 12.04 wireless keeps dropping on BCM4312

    - by Sheket
    I know there are plenty of questions very similar to these, but I've tried practically everything and it still isn't working. Some solutions get the wireless connection working, but it goes very slow and drops after a few minutes. Then it won't reconnect and keeps asking for password. Hope you can help me. Thanks in advance. This is the output for sudo lshw -C network *-network description: Network controller product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:18 memory:f0300000-f0303fff *-network description: Ethernet interface product: AR8132 Fast Ethernet vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: c0 serial: 00:23:5a:9b:6e:b1 size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.0.106 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:42 memory:f0200000-f023ffff ioport:a000(size=128) *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: 00:24:2c:83:f0:81 capabilities: ethernet physical wireless configuration: broadcast=yes driver=b43 driverversion=3.2.0-30-generic firmware=478.104 link=no multicast=yes wireless=IEEE 802.11bg And for lsmod | grep b43 b43 342643 0 mac80211 436455 1 b43 cfg80211 178679 2 b43,mac80211 bcma 25651 1 b43 ssb 50691 1 b43 And for rfkill list 5: phy0: Wireless LAN Soft blocked: no Hard blocked: no

    Read the article

  • Top down or bottom up approach?

    - by george_zakharov
    The title is the closest I think I can get to defining my problem. I'm an interface designer and I'm now working with a team of programmers to develop a new CMS for a heavy media site. I'm sorry if this is a very, very dumb question to ask here, but I really need some help. As I started developing a specification list for a prototype, it turned out to be a very big one. I realize now that the client side will be JS-heavy, with lots of drag-and-drop and other "cool designer stuff". The CMS will even include a small project management system for its users - nothing big like Basecamp, but with a live feed of comments, etc. The problem is that the team has now split. Some are proposing the existing backend solution used in another CMS; others are proposing to rewrite everything from scratch. The argument for keeping the code is that it's faster; the argument for rewriting is to make it a better fit for the proposed design (including Node.js and other things I don't really understand). The question is: can the UI specs influence the back end? The people who propose using the existing solution built everything with the Yii framework (as far as I know), and they say that nothing on the server is affected by this "interface coolness". Others say it is - that even autosave can't work without putting load on the server. If this is really incomprehensible then again, I'm sorry, and I'll be happy to clarify in response to your questions. Thanks in advance.

    Read the article

  • Configuring network to set wlan0 as primary

    - by Sheed
    I recently had to rebuild my PC and decided to go for Ubuntu 14.04. I think the mistake I made was starting from a 12.04 install disk instead of the 12.10 disk I'd used previously and, when given the option, setting my primary connection as Ethernet (because the wireless option didn't work). After upgrading to 14.04 I managed to get the wireless working - or rather, using steps like ifconfig -a and the like I managed to prove that the wireless card is installed and working. However, every time I boot without a hard-wired connection plugged in I get the message "waiting for network configuration". Once it has booted without a network, I can get the wireless working using

        sudo ifconfig wlan0 up
        iwlist wlan0 scan

    This seems to kick the wireless module into life: it appears in the GUI and I can then select a network, although all the options like edit network and disconnect are greyed out. What I would like, of course, is for wlan0 to be set as my primary default network. I've been looking for a solution and it seems I need to adjust the old /etc/network/interfaces file, but when I try to do so with sudo vi /etc/network/interfaces I simply have no idea what I'm doing - other than that typing :q! gets me out of there before I do too much damage! As far as I can tell (by navigating to the file in the GUI), my /etc/network/interfaces contains the following:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet dhcp

    If that's right, it clearly doesn't contain what it should, but I don't know how to fix it, nor do I even know if I'm on the right track. Any help would be appreciated, thanks :)
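
    For comparison, here is a hedged sketch of what the file often looks like when wlan0 is brought up through /etc/network/interfaces with DHCP and a WPA-protected network. The SSID and passphrase are placeholders, and an interface configured here is normally ignored by NetworkManager, so this trades the GUI controls for an ifupdown-managed connection:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Wired connection, only brought up when a cable is actually plugged in,
        # which avoids the long "waiting for network configuration" delay at boot
        allow-hotplug eth0
        iface eth0 inet dhcp

        # Wireless connection via wpa_supplicant's ifupdown hooks
        auto wlan0
        iface wlan0 inet dhcp
            wpa-ssid "YourNetworkName"
            wpa-psk  "YourPassphrase"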

    Read the article

  • Alternatives to PHP [closed]

    - by kaz
    We are starting a project whose goal is to create a new frontend interface to our product. The old version was written in PHP, very poorly. We are now choosing the language and frameworks we want to use for the new version. Requirements: the new interface will communicate with an API, and the application will not have its own database. We don't have a big team - at most 3 programmers for the entire project. The main programmers are PHP veterans who know some other technologies (Rails, C, C++, some Java), but not at a professional level; overall, though, they are good and experienced programmers. So: we want to find a good alternative to PHP. I like Rails very much, but the whole ActiveRecord model would be useless when working against the application's API. Java needs a lot of configuration and someone who is a Java expert to run this project properly; also, Java has a lot of big, complicated enterprise frameworks - not great for a team of 2-3 programmers. Python - I don't know Python and don't know good, experienced programmers who do - but it's not as complicated and big as Java, and maybe in the long run it's a good alternative to PHP. What are your thoughts?

    Read the article

  • Learning project Custom c# Cms [closed]

    - by user313378
    I want to start a new project, a custom CMS, because I think it's a good way to apply the knowledge I've collected of C#, DDD, NHibernate, MVC3 and JS. It would be great to hear some guidelines from experienced users here. I will use C#, ASP.NET MVC3 and the Razor view engine. I was also thinking of using NHibernate as the ORM, though I don't know whether NHibernate will hurt performance. Initially MSSQL 2008 will be used, but using an ORM layer means I can switch to some other database with no pain. I was thinking of creating a News entity with the properties Id, Name, Created, Updated, IntroText, Content, Title, Author and ListPhotos. Every input will be validated with unobtrusive JavaScript in the view, and it will be validated at the database level as well. Maybe the best approach is to create an interface that will be implemented by my CMS client entities, such as NewsEntity. This interface would include everything my client might request in the future - at least things that are not included in the entity right now, such as data consumed via RSS feed, WCF, etc. So basically, anything you think is a good idea, from documenting the project to coding, is welcome; everyone is invited to brainstorm about the custom CMS.

    Read the article

  • How to take a snapshot of an IE webpage through a BHO (C#)

    - by Kapil
    Hi, I am trying to build an IE BHO in C# for taking the snapshot of a webpage loaded in the IE browser. Here is what I'm trying to do: public class ShowToolbarBHO : BandObjectLib.IObjectWithSite { IWebBrowser2 webBrowser = null; public void SetSite (Object site) { ....... if (site != null) { ...... webBrowser = (IWebBrowser2)site; ...... } } } Also, I p/invoke the following COM methods: [Guid("0000010D-0000-0000-C000-000000000046")] [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)] [ComImportAttribute()] public interface IViewObject { void Draw([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect, [In] IntPtr ptd, IntPtr hdcTargetDev, IntPtr hdcDraw, [MarshalAs(UnmanagedType.LPStruct)] ref COMRECT lprcBounds, [In] IntPtr lprcWBounds, IntPtr pfnContinue, int dwContinue); int GetColorSet([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect, [In] IntPtr ptd, IntPtr hicTargetDev, [Out] IntPtr ppColorSet); int Freeze([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect, out IntPtr pdwFreeze); int Unfreeze([MarshalAs(UnmanagedType.U4)] int dwFreeze); int SetAdvise([MarshalAs(UnmanagedType.U4)] int aspects, [MarshalAs(UnmanagedType.U4)] int advf, [MarshalAs(UnmanagedType.Interface)] IAdviseSink pAdvSink); void GetAdvise([MarshalAs(UnmanagedType.LPArray)] out int[] paspects, [MarshalAs(UnmanagedType.LPArray)] out int[] advf, [MarshalAs(UnmanagedType.LPArray)] out IAdviseSink[] pAdvSink); } [StructLayoutAttribute(LayoutKind.Sequential)] public class COMRECT { public int left; public int top; public int right; public int bottom; public COMRECT() { } public COMRECT(int left, int top, int right, int bottom) { this.left = left; this.top = top; this.right = right; this.bottom = bottom; } } [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)] [ComVisibleAttribute(true)] [GuidAttribute("0000010F-0000-0000-C000-000000000046")] [ComImportAttribute()] public interface IAdviseSink { void OnDataChange([In]IntPtr pFormatetc, [In]IntPtr pStgmed); void OnViewChange([MarshalAs(UnmanagedType.U4)] int dwAspect, [MarshalAs(UnmanagedType.I4)] int lindex); void OnRename([MarshalAs(UnmanagedType.Interface)] object pmk); void OnSave(); void OnClose(); } Now When I take the snapshot: I make a call CaptureWebScreenImage((IHTMLDocument2) webBrowser.document); public static Image CaptureWebScreenImage(IHTMLDocument2 myDoc) { int heightsize = (int)getDocumentAttribute(myDoc, "scrollHeight"); int widthsize = (int)getDocumentAttribute(myDoc, "scrollWidth"); Bitmap finalImage = new Bitmap(widthsize, heightsize); Graphics gFinal = Graphics.FromImage(finalImage); COMRECT rect = new COMRECT(); rect.left = 0; rect.top = 0; rect.right = widthsize; rect.bottom = heightsize; IntPtr hDC = gFinal.GetHdc(); IViewObject vO = myDoc as IViewObject; vO.Draw(1, -1, (IntPtr)0, (IntPtr)0, (IntPtr)0, (IntPtr)hDC, ref rect, (IntPtr)0, (IntPtr)0, 0); gFinal.ReleaseHdc(); gFinal.Dispose(); return finalImage; } I am not getting the image of the webpage. Rather I am getting an image with black background. I am not sure if this is the right way of doing it, but I found over the web that IViewObject::Draw method is used for taking the image of a webpage in IE. I was earlier doing the image capture using the Native PrintWindow() method as mentioned in the following codeproject's page - http://www.codeproject.com/KB/graphics/IECapture.aspx But the image size is humongous! I was trying to see if I can reduce the size by using other techniques. 
It would be great if someone can point out the mistakes (I am sure there would be many) in my code above. Thanks, Kapil

    Read the article

  • Is Multicast broken for Android 2.0.1 (currently on the DROID) or am I missing something?

    - by Gubatron
    This code works perfectly in Ubuntu, in Windows and MacOSX, it also works fine with a Nexus-One currently running firmware 2.1.1. I start sending and listening multicast datagrams, and all the computers and the Nexus-One will see each other perfectly. Then I run the same code on a Droid (Firmware 2.0.1), and everybody will get the packets sent by the Droid, but the droid will listen only to it's own packets. This is the run() method of a thread that's constantly listening on a Multicast group for incoming packets sent to that group. I'm running my tests on a local network where I have multicast support enabled in the router. My goal is to have devices meet each other as they come on line by broadcasting packages to a multicast group. public void run() { byte[] buffer = new byte[65535]; DatagramPacket dp = new DatagramPacket(buffer, buffer.length); try{ MulticastSocket ms = new MulticastSocket(_port); ms.setNetworkInterface(_ni); //non loopback network interface passed ms.joinGroup(_ia); //the multicast address, currently 224.0.1.16 Log.v(TAG,"Joined Group " + _ia); while (true) { ms.receive(dp); String s = new String(dp.getData(),0,dp.getLength()); Log.v(TAG,"Received Package on "+ _ni.getName() +": " + s); Message m = new Message(); Bundle b = new Bundle(); b.putString("event", "Listener ("+_ni.getName()+"): \"" + s + "\""); m.setData(b); dispatchMessage(m); //send to ui thread } } catch (SocketException se) { System.err.println(se); } catch (IOException ie) { System.err.println(ie); } } Over here, is the code that sends the Multicast Datagram out of every valid network interface available (that's not the loopback interface). public void sendPing() { MulticastSocket ms = null; try { ms = new MulticastSocket(_port); ms.setTimeToLive(TTL_GLOBAL); List<NetworkInterface> interfaces = getMulticastNonLoopbackNetworkInterfaces(); for (NetworkInterface iface : interfaces) { //skip loopback if (iface.getName().equals("lo")) continue; ms.setNetworkInterface(iface); _buffer = ("FW-"+ _name +" PING ("+iface.getName()+":"+iface.getInetAddresses().nextElement()+")").getBytes(); DatagramPacket dp = new DatagramPacket(_buffer, _buffer.length,_ia,_port); ms.send(dp); Log.v(TAG,"Announcer: Sent packet - " + new String(_buffer) + " from " + iface.getDisplayName()); } } catch (IOException e) { e.printStackTrace(); } catch (Exception e2) { e2.printStackTrace(); } } Update (April 2nd 2010) I found a way to have the Droid's network interface to communicate using Multicast! _wifiMulticastLock = ((WifiManager) context.getSystemService(Context.WIFI_SERVICE)).createMulticastLock("multicastLockNameHere"); _wifiMulticastLock.acquire(); Then when you're done... if (_wifiMulticastLock != null && _wifiMulticastLock.isHeld()) _wifiMulticastLock.release(); After I did this, the Droid started sending and receiving UDP Datagrams on a Multicast group. gubatron

    Read the article

  • How to have two UIPickerViews together in one ViewController?

    - by 0SX
    I'm trying to put 2 UIPickerViews together in one ViewController. Each UIPickerView has different data arrays. I'm using interface builder to link the pickers up. I know I'll have to use separate delegates and dataSources but I can't seem to hook everything up with interface builder correctly. Here's all my code: pickerTesting.h #import <UIKit/UIKit.h> #import "picker2DataSource.h" @interface pickerTestingViewController : UIViewController <UIPickerViewDelegate, UIPickerViewDataSource>{ IBOutlet UIPickerView *picker; IBOutlet UIPickerView *picker2; NSMutableArray *pickerViewArray; } @property (nonatomic, retain) IBOutlet UIPickerView *picker; @property (nonatomic, retain) IBOutlet UIPickerView *picker2; @property (nonatomic, retain) NSMutableArray *pickerViewArray; @end pickerTesting.m #import "pickerTestingViewController.h" @implementation pickerTestingViewController @synthesize picker, picker2, pickerViewArray; - (void)viewDidLoad { [super viewDidLoad]; pickerViewArray = [[NSMutableArray alloc] init]; [pickerViewArray addObject:@" 100 "]; [pickerViewArray addObject:@" 200 "]; [pickerViewArray addObject:@" 400 "]; [pickerViewArray addObject:@" 600 "]; [pickerViewArray addObject:@" 1000 "]; [picker selectRow:1 inComponent:0 animated:NO]; picker2.delegate = self; picker2.dataSource = self; } - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)picker; { return 1; } - (void)pickerView:(UIPickerView *)picker didSelectRow:(NSInteger)row inComponent:(NSInteger)component { } - (NSInteger)pickerView:(UIPickerView *)picker numberOfRowsInComponent:(NSInteger)component; { return [pickerViewArray count]; } - (NSString *)pickerView:(UIPickerView *)picker titleForRow:(NSInteger)row forComponent:(NSInteger)component; { return [pickerViewArray objectAtIndex:row]; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @end And I have a separate class for the other datasource. picker2DataSource.h @interface picker2DataSource : NSObject <UIPickerViewDataSource, UIPickerViewDelegate> { NSMutableArray *customPickerArray; } @property (nonatomic, retain) NSMutableArray *customPickerArray; @end picker2DataSource.m #import "picker2DataSource.h" @implementation picker2DataSource @synthesize customPickerArray; - (id)init { // use predetermined frame size self = [super init]; if (self) { customPickerArray = [[NSMutableArray alloc] init]; [customPickerArray addObject:@" a "]; [customPickerArray addObject:@" b "]; [customPickerArray addObject:@" c "]; [customPickerArray addObject:@" d "]; [customPickerArray addObject:@" e "]; } return self; } - (void)dealloc { [customPickerArray release]; [super dealloc]; } - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)picker2; { return 1; } - (void)pickerView:(UIPickerView *)picker2 didSelectRow:(NSInteger)row inComponent:(NSInteger)component { } - (NSInteger)pickerView:(UIPickerView *)picker2 numberOfRowsInComponent:(NSInteger)component; { return [customPickerArray count]; } - (NSString *)pickerView:(UIPickerView *)picker2 titleForRow:(NSInteger)row forComponent:(NSInteger)component; { return [customPickerArray objectAtIndex:row]; } @end Any help or code examples would be great. Thanks.

    Read the article

  • Duplex Contract GetCallbackChannel always returns a null-instance

    - by Yaroslav
    Hi! Here is the server code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; using System.Runtime.Serialization; using System.ServiceModel.Description; namespace Console_Chat { [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IMyCallbackContract))] public interface IMyService { [OperationContract(IsOneWay = true)] void NewMessageToServer(string msg); [OperationContract(IsOneWay = false)] bool ServerIsResponsible(); } [ServiceContract] public interface IMyCallbackContract { [OperationContract(IsOneWay = true)] void NewMessageToClient(string msg); [OperationContract(IsOneWay = true)] void ClientIsResponsible(); } [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)] public class MyService : IMyService { public IMyCallbackContract callback = null; /* { get { return OperationContext.Current.GetCallbackChannel<IMyCallbackContract>(); } } */ public MyService() { callback = OperationContext.Current.GetCallbackChannel<IMyCallbackContract>(); } public void NewMessageToServer(string msg) { Console.WriteLine(msg); } public void NewMessageToClient( string msg) { callback.NewMessageToClient(msg); } public bool ServerIsResponsible() { return true; } } class Server { static void Main(string[] args) { String msg = "none"; ServiceMetadataBehavior behavior = new ServiceMetadataBehavior(); ServiceHost serviceHost = new ServiceHost( typeof(MyService), new Uri("http://localhost:8080/")); serviceHost.Description.Behaviors.Add(behavior); serviceHost.AddServiceEndpoint( typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(), "mex"); serviceHost.AddServiceEndpoint( typeof(IMyService), new WSDualHttpBinding(), "ServiceEndpoint" ); serviceHost.Open(); Console.WriteLine("Server is up and running"); MyService server = new MyService(); server.NewMessageToClient("Hey client!"); /* do { msg = Console.ReadLine(); // callback.NewMessageToClient(msg); } while (msg != "ex"); */ Console.ReadLine(); } } } Here is the client's: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; using System.Runtime.Serialization; using System.ServiceModel.Description; using Console_Chat_Client.MyHTTPServiceReference; namespace Console_Chat_Client { [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IMyCallbackContract))] public interface IMyService { [OperationContract(IsOneWay = true)] void NewMessageToServer(string msg); [OperationContract(IsOneWay = false)] bool ServerIsResponsible(); } [ServiceContract] public interface IMyCallbackContract { [OperationContract(IsOneWay = true)] void NewMessageToClient(string msg); [OperationContract(IsOneWay = true)] void ClientIsResponsible(); } public class MyCallback : Console_Chat_Client.MyHTTPServiceReference.IMyServiceCallback { static InstanceContext ctx = new InstanceContext(new MyCallback()); static MyServiceClient client = new MyServiceClient(ctx); public void NewMessageToClient(string msg) { Console.WriteLine(msg); } public void ClientIsResponsible() { } class Client { static void Main(string[] args) { String msg = "none"; client.NewMessageToServer(String.Format("Hello server!")); do { msg = Console.ReadLine(); if (msg != "ex") client.NewMessageToServer(msg); else client.NewMessageToServer(String.Format("Client terminated")); } while (msg != "ex"); } } } } callback = OperationContext.Current.GetCallbackChannel(); This line constanly throws a NullReferenceException, what's 
the problem? Thanks!

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc. as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc. for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proofs-of-concept and R&D projects this new platform is now growing nicely and is already proving itself. Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance, and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place. Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency, and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor. The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.
    Do you think the architect role is a waste of time? Is it at odds with TDD and other practices? Can this mix of different practices be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • ASP.NET how to implement IServiceLayer

    - by rockinthesixstring
    I'm trying to follow the tutorial found here to implement a service layer in my MVC application. What I can't figure out is how to wire it all up. here's what I have so far. IUserRepository.vb Namespace Data Public Interface IUserRepository Sub AddUser(ByVal openid As String) Sub UpdateUser(ByVal id As Integer, ByVal about As String, ByVal birthdate As DateTime, ByVal openid As String, ByVal regionid As Integer, ByVal username As String, ByVal website As String) Sub UpdateUserReputation(ByVal id As Integer, ByVal AmountOfReputation As Integer) Sub DeleteUser(ByVal id As Integer) Function GetAllUsers() As IList(Of User) Function GetUserByID(ByVal id As Integer) As User Function GetUserByOpenID(ByVal openid As String) As User End Interface End Namespace UserRepository.vb Namespace Data Public Class UserRepository : Implements IUserRepository Private dc As DataDataContext Public Sub New() dc = New DataDataContext End Sub #Region "IUserRepository Members" Public Sub AddUser(ByVal openid As String) Implements IUserRepository.AddUser Dim user = New User user.LastSeen = DateTime.Now user.MemberSince = DateTime.Now user.OpenID = openid user.Reputation = 0 user.UserName = String.Empty dc.Users.InsertOnSubmit(user) dc.SubmitChanges() End Sub Public Sub UpdateUser(ByVal id As Integer, ByVal about As String, ByVal birthdate As Date, ByVal openid As String, ByVal regionid As Integer, ByVal username As String, ByVal website As String) Implements IUserRepository.UpdateUser Dim user = (From u In dc.Users Where u.ID = id Select u).Single user.About = about user.BirthDate = birthdate user.LastSeen = DateTime.Now user.OpenID = openid user.RegionID = regionid user.UserName = username user.WebSite = website dc.SubmitChanges() End Sub Public Sub UpdateUserReputation(ByVal id As Integer, ByVal AmountOfReputation As Integer) Implements IUserRepository.UpdateUserReputation Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault ''# Simply take the current reputation from the select statement ''# and add the proper "AmountOfReputation" user.Reputation = user.Reputation + AmountOfReputation dc.SubmitChanges() End Sub Public Sub DeleteUser(ByVal id As Integer) Implements IUserRepository.DeleteUser Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault dc.Users.DeleteOnSubmit(user) dc.SubmitChanges() End Sub Public Function GetAllUsers() As System.Collections.Generic.IList(Of User) Implements IUserRepository.GetAllUsers Dim users = From u In dc.Users Select u Return users.ToList End Function Public Function GetUserByID(ByVal id As Integer) As User Implements IUserRepository.GetUserByID Dim user = (From u In dc.Users Where u.ID = id Select u).FirstOrDefault Return user End Function Public Function GetUserByOpenID(ByVal openid As String) As User Implements IUserRepository.GetUserByOpenID Dim user = (From u In dc.Users Where u.OpenID = openid Select u).FirstOrDefault Return user End Function #End Region End Class End Namespace IUserService.vb Namespace Data Interface IUserService End Interface End Namespace UserService.vb Namespace Data Public Class UserService : Implements IUserService Private _ValidationDictionary As IValidationDictionary Private _repository As IUserRepository Public Sub New(ByVal validationDictionary As IValidationDictionary, ByVal repository As IUserRepository) _ValidationDictionary = validationDictionary _repository = repository End Sub Protected Function ValidateUser(ByVal UserToValidate As User) As Boolean Dim isValid As Boolean = True If 
UserToValidate.OpenID.Trim().Length = 0 Then _ValidationDictionary.AddError("OpenID", "OpenID is Required") isValid = False End If If UserToValidate.MemberSince = Nothing Then _ValidationDictionary.AddError("MemberSince", "MemberSince is Required") isValid = False End If If UserToValidate.LastSeen = Nothing Then _ValidationDictionary.AddError("LastSeen", "LastSeen is Required") isValid = False End If If UserToValidate.Reputation = Nothing Then _ValidationDictionary.AddError("Reputation", "Reputation is Required") isValid = False End If Return isValid End Function End Class End Namespace I have also wired up the IValidationDictionary.vb and the ModelStateWrapper.vb as described in the article above. What I'm having a problem with is actually implementing it in my controller. My controller looks something like this. Public Class UsersController : Inherits BaseController Private UserService As Data.IUserService Public Sub New() UserService = New Data.UserService(New Data.ModelStateWrapper(Me.ModelState), New Data.UserRepository) End Sub Public Sub New(ByVal service As Data.IUserService) UserService = service End Sub .... End Class however on the line that says Public Sub New(ByVal service As Data.IUserService) I'm getting an error 'service' cannot expose type 'Data.IUserService' outside the project through class 'UsersController' So my question is TWO PARTS How can I properly implement a Service Layer in my application using the concepts from that article? Should there be any content within my IUserService.vb?

    Read the article

  • random generator to obtain data from array not displaying

    - by Yang Jie Domodomo
    I know theres a better solution using arc4random (it's on my to-try-out-function list), but I wanted to try out using the rand() and stand(time(NULL)) function first. I've created a NSMutableArray and chuck it with 5 data. Testing out how many number it has was fine. But when I tried to use the button function to load the object it return me with object <sampleData: 0x9a2f0e0> - (IBAction)generateNumber:(id)sender { srand(time(NULL)); NSInteger number =rand()% ds.count ; label.text = [NSString stringWithFormat:@"object %@", [ds objectAtIndex:number] ]; NSLog(@"%@",label.text); } While I feel the main cause is the method itself, I've paste the rest of the code below just incase i made any error somewhere. ViewController.h #import <UIKit/UIKit.h> #import "sampleData.h" #import "sampleDataDAO.h" @interface ViewController : UIViewController @property (weak, nonatomic) IBOutlet UILabel *label; @property (weak, nonatomic) IBOutlet UIButton *onHitMePressed; - (IBAction)generateNumber:(id)sender; @property(nonatomic, strong) sampleDataDAO *daoDS; @property(nonatomic, strong) NSMutableArray *ds; @end ViewController.m #import "ViewController.h" @interface ViewController () @end @implementation ViewController @synthesize label; @synthesize onHitMePressed; @synthesize daoDS,ds; - (void)viewDidLoad { [super viewDidLoad]; daoDS = [[sampleDataDAO alloc] init]; self.ds = daoDS.PopulateDataSource; // Do any additional setup after loading the view, typically from a nib. } - (void)viewDidUnload { [self setLabel:nil]; [self setOnHitMePressed:nil]; [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } - (IBAction)generateNumber:(id)sender { srand(time(NULL)); NSInteger number =rand()% ds.count ; label.text = [NSString stringWithFormat:@"object %@", [ds objectAtIndex:number] ]; NSLog(@"%@",label.text); } @end sampleData.h #import <Foundation/Foundation.h> @interface sampleData : NSObject @property (strong,nonatomic) NSString * object; @end sampleData.m #import "sampleData.h" @implementation sampleData @synthesize object; @end sampleDataDAO.h #import <Foundation/Foundation.h> #import "sampleData.h" @interface sampleDataDAO : NSObject @property(strong,nonatomic)NSMutableArray*someDataArray; -(NSMutableArray *)PopulateDataSource; @end sampleDataDAO.m #import "sampleDataDAO.h" @implementation sampleDataDAO @synthesize someDataArray; -(NSMutableArray *)PopulateDataSource { someDataArray = [[NSMutableArray alloc]init]; sampleData * myData = [[sampleData alloc]init]; myData.object= @"object 1"; [someDataArray addObject:myData]; myData=nil; myData = [[sampleData alloc] init]; myData.object= @"object 2"; [someDataArray addObject:myData]; myData=nil; myData = [[sampleData alloc] init]; myData.object= @"object 3"; [someDataArray addObject:myData]; myData=nil; myData = [[sampleData alloc] init]; myData.object= @"object 4"; [someDataArray addObject:myData]; myData=nil; myData = [[sampleData alloc] init]; myData.object= @"object 5"; [someDataArray addObject:myData]; myData=nil; return someDataArray; } @end

    Read the article

  • I'm trying to pass a string from my first ViewController to my second ViewController but it returns NULL

    - by Dashony
    In my first view controller I have 3 input fields each of them take the user input into and saves it into a string such as: address, username and password as NSUserDefaults. This part works fine. In my second view controller I'm trying to take the 3 strings from first controller (address, username and password) create a html link based on the 3 strings. I've tried many ways to access the 3 strings with no luck, the result I get is NULL. Here is my code: //.h file - first view controller with the 3 input fields CamSetup.h #import <UIKit/UIKit.h> @interface CamSetup : UIViewController <UITextFieldDelegate> { NSString * address; NSString * username; NSString * password; IBOutlet UITextField * addressField; IBOutlet UITextField * usernameField; IBOutlet UITextField * passwordField; } -(IBAction) saveAddress: (id) sender; -(IBAction) saveUsername: (id) sender; -(IBAction) savePassword: (id) sender; @property(nonatomic, retain) UITextField *addressField; @property(nonatomic, retain) UITextField *usernameField; @property(nonatomic, retain) UITextField *passwordField; @property(nonatomic, retain) NSString *address; @property(nonatomic, retain) NSString *username; @property(nonatomic, retain) NSString *password; @end //.m file - first view controller CamSetup.m #import "CamSetup.h" @interface CamSetup () @end @implementation CamSetup @synthesize addressField, usernameField, passwordField, address, username, password; -(IBAction) saveAddress: (id) sender { address = [[NSString alloc] initWithFormat:addressField.text]; [addressField setText:address]; NSUserDefaults *stringDefaultAddress = [NSUserDefaults standardUserDefaults]; [stringDefaultAddress setObject:address forKey:@"stringKey1"]; NSLog(@"String [%@]", address); } -(IBAction) saveUsername: (id) sender { username = [[NSString alloc] initWithFormat:usernameField.text]; [usernameField setText:username]; NSUserDefaults *stringDefaultUsername = [NSUserDefaults standardUserDefaults]; [stringDefaultUsername setObject:username forKey:@"stringKey2"]; NSLog(@"String [%@]", username); } -(IBAction) savePassword: (id) sender { password = [[NSString alloc] initWithFormat:passwordField.text]; [passwordField setText:password]; NSUserDefaults *stringDefaultPassword = [NSUserDefaults standardUserDefaults]; [stringDefaultPassword setObject:password forKey:@"stringKey3"]; NSLog(@"String [%@]", password); } - (void)viewDidLoad { [addressField setText:[[NSUserDefaults standardUserDefaults] objectForKey:@"stringKey1"]]; [usernameField setText:[[NSUserDefaults standardUserDefaults] objectForKey:@"stringKey2"]]; [passwordField setText:[[NSUserDefaults standardUserDefaults] objectForKey:@"stringKey3"]]; [super viewDidLoad]; } @end //.h second view controller LiveView.h #import <UIKit/UIKit.h> #import "CamSetup.h" @interface LiveView : UIViewController { NSString *theAddress; NSString *theUsername; NSString *thePassword; CamSetup *camsetup; //here is an instance of the first class } @property (nonatomic, retain) NSString *theAddress; @property (nonatomic, retain) NSString *theUsername; @property (nonatomic, retain) NSString *thePassword; @end //.m second view LiveView.m file #import "LiveView.h" @interface LiveView () @end @implementation LiveView @synthesize theAddress, theUsername, thePassword; - (void)viewDidLoad { [super viewDidLoad]; theUsername = camsetup.username; //this is probably not right? NSLog(@"String [%@]", theUsername); //resut here is NULL NSLog(@"String [%@]", camsetup.username); //and here NULL as well } @end

    Read the article

  • Delphi hook to redirect to different ip

    - by Chris
What is the best way to redirect any browser to a different IP for specific sites? For example, if the user types www.facebook.com in any browser he should be redirected to 127.0.0.1, and the same should happen if he types 66.220.146.11. What I have so far: using WinpkFilter I am able to intercept all traffic on port 80, with the type (in or out), source IP, destination IP and packet. My problem is how to modify the packet so that the browser gets redirected. This is the code I have right now:

program Pass;

{$APPTYPE CONSOLE}

uses
  SysUtils, Windows, Winsock, winpkf, iphlp;

var
  iIndex, counter : DWORD;
  hFilt : THANDLE;
  Adapts : TCP_AdapterList;
  AdapterMode : ADAPTER_MODE;
  Buffer, ParsedBuffer : INTERMEDIATE_BUFFER;
  ReadRequest : ETH_REQUEST;
  hEvent : THANDLE;
  hAdapter : THANDLE;
  pEtherHeader : TEtherHeaderPtr;
  pIPHeader : TIPHeaderPtr;
  pTcpHeader : TTCPHeaderPtr;
  pUdpHeader : TUDPHeaderPtr;
  SourceIP, DestIP : TInAddr;
  thePacket : PChar;
  f : TextFile;
  SourceIpString, DestinationIpString : string;
  SourceName, DestinationName : string;

// Reverse-resolve an IP address to a host name
function IPAddrToName(IPAddr : string) : string;
var
  SockAddrIn : TSockAddrIn;
  HostEnt : PHostEnt;
  WSAData : TWSAData;
begin
  WSAStartup($101, WSAData);
  SockAddrIn.sin_addr.s_addr := inet_addr(PChar(IPAddr));
  HostEnt := gethostbyaddr(@SockAddrIn.sin_addr.S_addr, 4, AF_INET);
  if HostEnt <> nil then
    result := StrPas(HostEnt^.h_name)
  else
    result := '';
end;

procedure ReleaseInterface();
begin
  // Restore default mode
  AdapterMode.dwFlags := 0;
  AdapterMode.hAdapterHandle := hAdapter;
  SetAdapterMode(hFilt, @AdapterMode);
  // Set NULL event to release previously set event object
  SetPacketEvent(hFilt, hAdapter, 0);
  // Close event
  if hEvent <> 0 then
    CloseHandle(hEvent);
  // Close driver object
  CloseFilterDriver(hFilt);
  // Release NDISAPI
  FreeNDISAPI();
end;

begin
  // Check the number of parameters
  if ParamCount() < 2 then
  begin
    Writeln('Command line syntax:');
    Writeln('   PassThru.exe index num');
    Writeln('   index - network interface index.');
    Writeln('   num - number of packets to filter');
    Writeln('You can use ListAdapters to determine correct index.');
    Exit;
  end;

  // Initialize NDISAPI
  InitNDISAPI();

  // Create driver object
  hFilt := OpenFilterDriver('NDISRD');
  if IsDriverLoaded(hFilt) then
  begin
    // Get parameters from command line
    iIndex := StrToInt(ParamStr(1));
    counter := StrToInt(ParamStr(2));

    // Set exit procedure
    ExitProcessProc := ReleaseInterface;

    // Get TCP/IP bound interfaces
    GetTcpipBoundAdaptersInfo(hFilt, @Adapts);

    // Check parameter values
    if iIndex > Adapts.m_nAdapterCount then
    begin
      Writeln('There is no network interface with such index on this system.');
      Exit;
    end;

    hAdapter := Adapts.m_nAdapterHandle[iIndex];
    AdapterMode.dwFlags := MSTCP_FLAG_SENT_TUNNEL or MSTCP_FLAG_RECV_TUNNEL;
    AdapterMode.hAdapterHandle := hAdapter;

    // Create notification event
    hEvent := CreateEvent(nil, TRUE, FALSE, nil);
    if hEvent <> 0 then
      if SetPacketEvent(hFilt, hAdapter, hEvent) <> 0 then
      begin
        // Initialize request
        ReadRequest.EthPacket.Buffer := @Buffer;
        ReadRequest.hAdapterHandle := hAdapter;
        SetAdapterMode(hFilt, @AdapterMode);
        counter := 0;

        //while counter <> 0 do
        while true do
        begin
          WaitForSingleObject(hEvent, INFINITE);
          while ReadPacket(hFilt, @ReadRequest) <> 0 do
          begin
            //dec(counter);
            pEtherHeader := TEtherHeaderPtr(@Buffer.m_IBuffer);
            if ntohs(pEtherHeader.h_proto) = ETH_P_IP then
            begin
              pIPHeader := TIPHeaderPtr(Integer(pEtherHeader) + SizeOf(TEtherHeader));
              SourceIP.S_addr := pIPHeader.SourceIp;
              DestIP.S_addr := pIPHeader.DestIp;
              if pIPHeader.Protocol = IPPROTO_TCP then
              begin
                pTcpHeader := TTCPHeaderPtr(Integer(pIPHeader) + (pIPHeader.VerLen and $F) * 4);
                if (pTcpHeader.SourcePort = htons(80)) or (pTcpHeader.DestPort = htons(80)) then
                begin
                  inc(counter);
                  if Buffer.m_dwDeviceFlags = PACKET_FLAG_ON_SEND then
                    Writeln(counter, ') - MSTCP --> Interface')
                  else
                    Writeln(counter, ') - Interface --> MSTCP');
                  Writeln('      Packet size = ', Buffer.m_Length);
                  Writeln(Format('      IP %.3u.%.3u.%.3u.%.3u --> %.3u.%.3u.%.3u.%.3u PROTOCOL: %u',
                    [byte(SourceIP.S_un_b.s_b1), byte(SourceIP.S_un_b.s_b2),
                     byte(SourceIP.S_un_b.s_b3), byte(SourceIP.S_un_b.s_b4),
                     byte(DestIP.S_un_b.s_b1), byte(DestIP.S_un_b.s_b2),
                     byte(DestIP.S_un_b.s_b3), byte(DestIP.S_un_b.s_b4),
                     byte(pIPHeader.Protocol)]));
                  Writeln(Format('      TCP SRC PORT: %d DST PORT: %d',
                    [ntohs(pTcpHeader.SourcePort), ntohs(pTcpHeader.DestPort)]));
                  // Get the data (start of the TCP payload)
                  thePacket := pchar(pEtherHeader) + (sizeof(TEtherHeaderPtr) + pIpHeader.VerLen * 4 + pTcpHeader.Offset * 4);
                  {
                  SourceIpString := IntToStr(byte(SourceIP.S_un_b.s_b1)) + '.' +
                    IntToStr(byte(SourceIP.S_un_b.s_b2)) + '.' +
                    IntToStr(byte(SourceIP.S_un_b.s_b3)) + '.' +
                    IntToStr(byte(SourceIP.S_un_b.s_b4));
                  DestinationIpString := IntToStr(byte(DestIP.S_un_b.s_b1)) + '.' +
                    IntToStr(byte(DestIP.S_un_b.s_b2)) + '.' +
                    IntToStr(byte(DestIP.S_un_b.s_b3)) + '.' +
                    IntToStr(byte(DestIP.S_un_b.s_b4));
                  }
                end;
              end;
            end;
            // if ntohs(pEtherHeader.h_proto) = ETH_P_RARP then
            //   Writeln('      Reverse Addr Res packet');
            // if ntohs(pEtherHeader.h_proto) = ETH_P_ARP then
            //   Writeln('      Address Resolution packet');
            //Writeln('__');
            if Buffer.m_dwDeviceFlags = PACKET_FLAG_ON_SEND then
              // Place packet on the network interface
              SendPacketToAdapter(hFilt, @ReadRequest)
            else
              // Indicate packet to MSTCP
              SendPacketToMstcp(hFilt, @ReadRequest);
            {
            if counter = 0 then
            begin
              Writeln('Filtering complete');
              readln;
              break;
            end;
            }
          end;
          ResetEvent(hEvent);
        end;
      end;
  end;
end.
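
A note on a simpler alternative (not part of the original question): if the goal is only to redirect a handful of host names, editing the Windows hosts file avoids packet rewriting entirely. The Python sketch below is a minimal illustration of that idea; the script name, the mapping table, and the assumption that it runs with administrator rights are all hypothetical. It also deliberately does not cover the raw-IP case (66.220.146.11), which still requires packet-level rewriting such as the WinpkFilter approach above.

# hosts_redirect.py - hedged sketch of a hosts-file based redirect (hostnames only)
import os

# Hypothetical mapping of host names to the IP they should resolve to
REDIRECTS = {
    "www.facebook.com": "127.0.0.1",
    "facebook.com": "127.0.0.1",
}

# Standard hosts file location on Windows
HOSTS_PATH = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"),
                          "System32", "drivers", "etc", "hosts")

def add_redirects(hosts_path=HOSTS_PATH, redirects=REDIRECTS):
    # Read the current hosts file and append only the entries that are missing.
    with open(hosts_path, "r") as f:
        existing = f.read()
    new_lines = [ip + " " + host for host, ip in redirects.items()
                 if host not in existing]
    if new_lines:
        # Requires administrator rights to write to the hosts file
        with open(hosts_path, "a") as f:
            f.write("\n" + "\n".join(new_lines) + "\n")

if __name__ == "__main__":
    add_redirects()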

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
This morning we released a huge set of updates to Windows Azure. These new capabilities include:

Backup Services: General Availability of Windows Azure Backup Services
Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
Active Directory: Securely manage hundreds of SaaS applications
Enterprise Management: Use Active Directory to Better Manage Windows Azure
Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

All of these improvements are now available to use immediately. Below are more details about them.

Backup Service: General Availability Release of Windows Azure Backup

Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and ready to use for production scenarios.

Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service gives IT administrators and developers the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost.

Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service can also back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but yourself).

Getting Started

To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this:

Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it:

Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you set up a backup schedule for your registered Windows Servers. It also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.

Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you recover data from a backup. It also explains how to use Windows PowerShell cmdlets to do the same tasks.

Below are some of the key benefits the Windows Azure Backup Service provides:

Simple configuration and management.
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, to provide a seamless backup and recovery experience to a local disk or to the cloud.

Block-level incremental backups.
The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, reducing storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.

Data compression, encryption and throttling.
The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in cloud storage. The encryption key is not available to the Windows Azure Backup Service, so the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.

Data integrity is verified in the cloud.
In addition to the secure backups, the backed-up data is also automatically checked for integrity once the backup is done. As a result, any corruption which may arise due to data transfer can be easily identified and is fixed automatically.

Configurable retention policies for storing data in the cloud.
The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

Hyper-V Recovery Manager: Now Available in Public Preview

I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM).

Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and restore important services accurately, consistently, and with minimal downtime.

Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide.

Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include:

Ability to Delete both VM Instances + Attached Disks in One Operation

Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:

We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button:

Warnings on Availability Sets with Only One Virtual Machine In Them

One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.

One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this:

You can learn more about configuring the availability of your virtual machines here.

Configuring SQL Server Always On

SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server Always On by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox:

You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.
In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

SSO to every SaaS app we integrate with – Users can single sign-on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their Active Directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care; they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward. (A minimal REST sketch illustrating this is included further below in this section.)

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within:

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it did before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it:

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

Note that users within a directory do not by default have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the administrators tab within it.
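
To make point 3 above concrete, here is a minimal sketch (not from the original post) of calling the Service Management REST API from Python with an Active Directory bearer token. It assumes you have already obtained an access token for the Service Management resource out of band (for example through the Active Directory library of your choice); the subscription ID placeholder and the x-ms-version value shown are assumptions you should adjust for your environment.

# list_hosted_services.py - hedged sketch against the classic Service Management REST API
import requests

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
ACCESS_TOKEN = "<azure-ad-access-token>"     # obtained out of band

def list_hosted_services(subscription_id, token):
    # List the cloud (hosted) services in the subscription.
    url = "https://management.core.windows.net/{0}/services/hostedservices".format(subscription_id)
    headers = {
        "Authorization": "Bearer " + token,  # Active Directory token instead of a management certificate
        "x-ms-version": "2013-08-01",        # assumed API version for that era of the service
    }
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    return resp.text  # XML listing of hosted services

if __name__ == "__main__":
    print(list_hosted_services(SUBSCRIPTION_ID, ACCESS_TOKEN))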
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so:

Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription):

You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter:

Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using:

No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions to directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it:

If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory-to-subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account:

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

Visual Studio 2013 Support
Integrated Windows Azure Sign-In support within Visual Studio
Remote Debugging Cloud Services with Visual Studio
Firewall Management support within Visual Studio for SQL Databases
Visual Studio 2013 RTM VM Images for MSDN Subscribers
Windows Azure Management Libraries for .NET
Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I'll post a follow-up blog shortly with more details about all of the above.
Additional Updates

In addition to the above enhancements, today's release also includes a number of additional improvements:

AutoScale: Richer time and date based scheduling support (set different rules on different dates)
AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today's Windows Azure release enables a bunch of great new scenarios and a much richer enterprise authentication offering.

If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Example WLST Script to Obtain JDBC and JTA MBean Values

    - by Daniel Mortimer
Introduction

Following on from the blog entry "Get an Offline or Online WebLogic Domain Summary Using WLST!", I have had a request to create a smaller example which only collects a selection of JDBC (System Resource) and JTA configuration and runtime MBean values. So, here it is.

Download Sample Script

You can grab the sample script by clicking here.

Instructions to Run:

1. After download, extract the zip to the machine hosting the WebLogic environment. You should have three directories along with a readme.txt:

output
Sample_Output
scripts

2. In the scripts directory, find the start wrapper script startWLSTJDBCSummarizer.sh (Unix) or startWLSTJDBCSummarizer.cmd (MS Windows). Open the appropriate file in an editor and change the environment variable settings to suit your system.

Example - startWLSTDomainSummarizer.cmd

set WL_HOME=D:\product\FMW11g\wlserver_10.3
set DOMAIN_HOME=D:\product\FMW11g\user_projects\domains\MyDomain
set WLST_OUTPUT_PATH=D:\WLSTDomainSummarizer\output\
set WLST_OUTPUT_FILE=WLST_JDBC_Summary_Via_MBeans.html
call "%WL_HOME%\common\bin\wlst.cmd" WLS_JDBC_Summary_Online.py

Note: The WLST_OUTPUT_PATH directory value must have a trailing slash. If there is no trailing slash, the script will error and not continue.

3. Run the shell / command line wrapper script. It should launch WLST and kick off "WLS_JDBC_Summary_Online.py". This will prompt you for a few values, e.g.

Is your domain Admin Server up and running and do you have the connection details? (Y /N ): Y
Enter connection URL to Admin Server e.g t3://mymachine.acme.com:7001 : t3://localhost:7001
Enter weblogic username: weblogic
Enter weblogic username password (function prompt 1): welcome1

(Note: the value typed in for the password will not be echoed back to the console.)

4. If the scripts run successfully, you should get an HTML summary in the specified output directory. See example screenshots below:

Screenshot 1 - JDBC System Resource Tab Page
Screenshot 2 - JTA Tab Page

5. For the HTML to render correctly, ensure the .js and .css files provided (review the output directory created by the zip file extraction) are accessible. For example, to view the HTML locally (without using a web server), place the HTML output, jquery-ui.js, spry.js and wlstsummarizer.css in the same directory.

Disclaimer

This is a sample script. I have tested it against WebLogic Server 10.3.6 domains on MS Windows and Unix. I cannot guarantee that the script will run error free or produce the expected output on your system. If you have any feedback add a comment to the blog and I will endeavour to fix any problems with my WLST code.

Credits

JQuery: http://jquery.com/
Spry (Adobe): https://github.com/adobe/Spry
http://www.red-team-design.com/cool-headings-with-pseudo-elements
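
If you just want a rough idea of the kind of JDBC and JTA values the sample collects, the few lines of WLST below sketch the same approach by hand. This is not the downloadable script, only a minimal hand-written example: the credentials and URL are placeholders, and it assumes an online connection to a running Admin Server.

# jdbc_jta_quickcheck.py - minimal WLST sketch (run with: wlst.sh jdbc_jta_quickcheck.py)
connect('weblogic', 'welcome1', 't3://localhost:7001')  # placeholder credentials and URL

# JDBC system resource configuration values
domainConfig()
for res in cmo.getJDBCSystemResources():
    ds = res.getJDBCResource()
    print 'DataSource: ' + ds.getName()
    print '  JNDI names  : ' + str(ds.getJDBCDataSourceParams().getJNDINames())
    print '  Max capacity: ' + str(ds.getJDBCConnectionPoolParams().getMaxCapacity())

# JDBC and JTA runtime values on each running server
domainRuntime()
for server in domainRuntimeService.getServerRuntimes():
    print 'Server: ' + server.getName()
    jta = server.getJTARuntime()
    print '  JTA total transactions: ' + str(jta.getTransactionTotalCount())
    print '  JTA rolled back       : ' + str(jta.getTransactionRolledBackTotalCount())
    for dsRuntime in server.getJDBCServiceRuntime().getJDBCDataSourceRuntimeMBeans():
        print '  ' + dsRuntime.getName() + ' active connections: ' + \
              str(dsRuntime.getActiveConnectionsCurrentCount())

disconnect()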

    Read the article
