Search Results

Search found 10675 results on 427 pages for 'dynamic proxy'.

  • Sprinkle Some Magik on that Java Virtual Machine

    - by Jim Connors
    GE Energy, through its Smallworld subsidiary, has been providing geospatial software solutions to the utility and telco markets for over 20 years. One of the fundamental building blocks of their technology is a dynamically-typed, object-oriented programming language called Magik. Like Java, Magik source code is compiled down to bytecodes that run on a virtual machine -- in this case the Magik Virtual Machine. Throughout the years, GE has invested considerable engineering talent in the support and maintenance of this virtual machine. At the same time, vast energy and resources have been invested in the Java Virtual Machine. The question for GE has been whether to continue to make that investment on its own or to leverage the massive effort provided by the Java community. Utilizing the Java Virtual Machine instead of maintaining its own virtual machine would give GE more opportunity to focus on application solutions.
    At last count, there are dozens, perhaps hundreds, of programming languages that have been hosted atop the Java Virtual Machine. Prior to the release of Java 7 that effort, although certainly possible, was generally less than optimal for languages like Magik because of their dynamic nature. Java, as a statically typed language, had little use for this capability. In the quest to be a more universal virtual machine, Java 7, via JSR-292, introduced a new bytecode called invokedynamic. In short, invokedynamic affords the more flexible method-call mechanism needed by dynamic languages like Magik. With this new capability, GE Energy has succeeded in hosting their Magik environment on top of the Java Virtual Machine. So you may ask, why would GE wish to do such a thing? The benefits are many:
    - Competitors to GE Energy claimed that the Magik environment was proprietary. By utilizing the Java Virtual Machine, that argument gets put to bed. JVM development is done in open source, where contributions are made worldwide by all types of organizations and individuals.
    - The unprecedented wealth of class libraries and applications written for the Java platform is now opened up to the Magik/JVM platform as first-class citizens. In addition, the Magik/JVM solution vastly increases the developer pool to include the 9 million Java developers -- the largest developer community on the planet.
    - Applications running on the JVM showed substantial performance gains, in some cases as much as a 5x speed-up over the original Magik platform.
    - Legacy Magik applications can still run on the original platform. They can be seamlessly migrated to run on the JVM by simply recompiling the source code.
    - GE can now leverage the huge Java community. Undeniably the best virtual machine ever created, the JVM is continually improved, poked, prodded and scrutinized by hundreds if not thousands of world-class developers. As enhancements are made, GE automatically gains access to them.
    - As Magik has little in the way of support for multi-threading, GE will benefit from current and future Java offerings (e.g. lambda expressions) that aim to further facilitate multi-core/multi-threaded application development.
    - As the JVM is available for many more platforms, it broadens the reach of Magik, including the potential to run on a class of devices never envisioned just a few short years ago. For example, Java SE compatible runtime environments are available for popular embedded ARM/Intel/PowerPC configurations that could theoretically host this software too.
    As compared to other JVM language projects, the Magik integration differs in that it represents a serious commercial entity betting a sizable part of its business on the success of this effort. Expect to see announcements not only from General Electric, but also from other organizations as they realize the benefits of utilizing the Java Virtual Machine.

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background
    One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware.
    On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads: some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior.
    We're very aware of these challenges in Solaris, and have been working to provide the best out-of-the-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers both to address issues caused by contention over shared hardware resources and to explicitly take advantage of features such as T4's dynamic threading.
    What it is
    The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher-priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.
    How does it work?
    Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.
    Best practices
    If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-the-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is an appropriate place to ask a design question. The site gives me a hint: "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway... One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. Initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created the entity:
        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }
    But writing a HitsStatsDO for each target from each user would produce a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering evenly over 24 hours, it would give us ~1040 writes/second. Obviously that hits the concurrent-writes limit of the Datastore. I decided I'll collect data for one hour and save that; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:
        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }
    and a map with the shard for the current hour and the previous hour (which doesn't stay here for long):
        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate
    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? That's hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting? Thank you very much in advance for your thoughts.

    Read the article

  • Replicating between Cloud and On-Premises using Oracle GoldenGate

    - by Ananth R. Tiru
    Do you have applications running on the cloud that you need to connect with your on-premises systems? The most likely answer to this question is a resounding YES! If so, then you understand the importance of keeping the data fresh at all times across the cloud and on-premises environments. This is also one of the key focus areas for the new GoldenGate 12c release, which we announced a couple of weeks ago via a press release. Most enterprises have spent years avoiding the data "silos" that inhibit productivity. For example, an enterprise which has adopted a CRM strategy could be relying on an on-premises marketing application used for developing and nurturing leads. At the same time it could be using a SaaS-based sales application to create opportunities and quotes. The sales and marketing teams which use these systems need to be able to access and share the data in a reliable and cohesive way. This example can be extended to other application areas such as HR, Supply Chain, and Finance, and to the demands users place on getting a consistent view of the data. When it comes to moving data in hybrid environments, some of the key requirements include minimal latency, reliability and security:
    - Data must remain fresh. As data ages it becomes less relevant and less valuable -- day-old data is often insufficient in today's competitive landscape.
    - Reliability must be guaranteed despite system or connectivity issues that can occur between the cloud and on-premises instances.
    - Security is a key concern when replicating between cloud and on-premises instances.
    There are several options to consider when replicating between the cloud and on-premises instances.
    Option 1 – Secured network established between the cloud and on-premises: A secured network is established between the cloud and on-premises which enables the applications (including replication software) running on the cloud and on-premises to have seamless connectivity to other applications irrespective of where they are physically located.
    Option 2 – Restricted network established between the cloud and on-premises: A restricted network is established between the cloud and on-premises instances which enables certain ports (required by replication) to be opened on both the cloud and the on-premises instances and whitelists the IP addresses of the cloud and on-premises instances.
    Option 3 – Restricted network access from on-premises and cloud through HTTP proxy: This option can be considered when the ports required by the applications (including replication software) are not open and the cloud instance is not whitelisted on the on-premises instance. This option of tunneling through an HTTP proxy should only be considered when proper security exceptions are obtained.
    Oracle GoldenGate
    Oracle GoldenGate is used by major Fortune 500 companies and other industry leaders worldwide to support mission-critical systems for data availability and integration. Oracle GoldenGate addresses the requirements for ensuring data consistency between cloud and on-premises instances, thus enabling business processes to run effectively and reliably. The architecture diagram in the original post illustrates the scenario where the cloud and the on-premises instance are connected using GoldenGate through a secured network. In that scenario, Oracle GoldenGate is installed and configured on both the cloud and the on-premises instances. On the cloud instance, Oracle GoldenGate is installed and configured on the machine where the database instance can be accessed.
Oracle GoldenGate can be configured for unidirectional or bi-directional replication between the cloud and on premises instances. The specific configuration details of Oracle GoldenGate processes will depend upon the option selected for establishing connectivity between the cloud and on-premises instances. The knowledge article (ID - 1588484.1) titled ' Replicating between Cloud and On-Premises using Oracle GoldenGate' discusses in detail the options for replicating between the cloud and on-premises instances. The article can be found on My Oracle Support. To learn more about Oracle GoldenGate 12c register for our launch webcast where we will go into these new features in more detail.   You may also want to download our white paper "Oracle GoldenGate 12c Release 1 New Features Overview" I would love to hear your requirements for replicating between on-premises and cloud instances, as well as your comments about the strategy discussed in the knowledge article to address your needs. Please post your comments in this blog or in the Oracle GoldenGate public forum - https://forums.oracle.com/community/developer/english/business_intelligence/system_management_and_integration/goldengate

    Read the article

  • Bumblebee [ERROR]Cannot access secondary GPU - error: [XORG]

    - by Lunchbox
    Though this may seem like a duplicate question, none of the suggestions I've seen have worked for me, although nearly all posters get good results. I'll start with hardware:
        Metabox W350ST notebook
        Intel Core i7 4700
        16GB RAM
        GTX 765M (with Optimus)
        128GB SSD
        1TB SSHD
    My initial error output when trying to optirun a game is:
        [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please
        [133.973920] [ERROR]Aborting because fallback start is disabled.
    If anything else is needed to troubleshoot this, just let me know. Adding bumblebee.conf:
        # Configuration file for Bumblebee. Values should **not** be put between quotes

        ## Server options. Any change made in this section will need a server restart to take effect.
        [bumblebeed]
        # The secondary Xorg server DISPLAY number
        VirtualDisplay=:8
        # Should the unused Xorg server be kept running? Set this to true if waiting
        # for X to be ready is too long and don't need power management at all.
        KeepUnusedXServer=false
        # The name of the Bumbleblee server group name (GID name)
        ServerGroup=bumblebee
        # Card power state at exit. Set to false if the card shoud be ON when Bumblebee server exits.
        TurnCardOffAtExit=false
        # The default behavior of '-f' option on optirun. If set to "true", '-f' will be ignored.
        NoEcoModeOverride=false
        # The Driver used by Bumblebee server. If this value is not set (or empty),
        # auto-detection is performed. The available drivers are nvidia and nouveau
        # (See also the driver-specific sections below)
        Driver=nvidia
        # Directory with a dummy config file to pass as a -configdir to secondary X
        XorgConfDir=/etc/bumblebee/xorg.conf.d

        ## Client options. Will take effect on the next optirun executed.
        [optirun]
        # Acceleration/ rendering bridge, possible values are auto, virtualgl and primus.
        Bridge=auto
        # The method used for VirtualGL to transport frames between X servers.
        # Possible values are proxy, jpeg, rgb, xv and yuv.
        VGLTransport=proxy
        # List of paths which are searched for the primus libGL.so.1 when using the primus bridge
        PrimusLibraryPath=/usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus
        # Should the program run under optirun even if Bumblebee server or nvidia card is not available?
        AllowFallbackToIGC=false

        # Driver-specific settings are grouped under [driver-NAME]. The sections are
        # parsed if the Driver setting in [bumblebeed] is set to NAME (or if auto-detection resolves to NAME).
        # PMMethod: method to use for saving power by disabling the nvidia card, valid values are:
        #   auto - automatically detect which PM method to use
        #   bbswitch - new in BB 3, recommended if available
        #   switcheroo - vga_switcheroo method, use at your own risk
        #   none - disable PM completely
        # https://github.com/Bumblebee-Project/Bumblebee/wiki/Comparison-of-PM-methods

        ## Section with nvidia driver specific options, only parsed if Driver=nvidia
        [driver-nvidia]
        # Module name to load, defaults to Driver if empty or unset
        KernelDriver=nvidia
        PMMethod=auto
        # colon-separated path to the nvidia libraries
        LibraryPath=/usr/lib/nvidia-current:/usr/lib32/nvidia-current
        # comma-separated path of the directory containing nvidia_drv.so and the default Xorg modules path
        XorgModulePath=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules
        XorgConfFile=/etc/bumblebee/xorg.conf.nvidia

        ## Section with nouveau driver specific options, only parsed if Driver=nouveau
        [driver-nouveau]
        KernelDriver=nouveau
        PMMethod=auto
        XorgConfFile=/etc/bumblebee/xorg.conf.nouveau
    DRIVER VERSION - Output of jockey-text -l:
        nvidia_304_updates - nvidia_304_updates (Proprietary, Enabled, Not in use)

    Read the article

  • Minimum team development sizes

    - by MarkPearl
    Disclaimer - these are observations that I have had, I am not sure if this follows the philosophy of scrum, agile or whatever, but most of these insights were gained while implementing a scrum scenario. Two is a partnership, three starts a team For a while I thought that a team was anything more than one and that scrum could be effective methodology with even two people. I have recently adjusted my thinking to a scrum team being a minimum of three, so what happened to two and what do you call it? For me I consider a group of two people working together a partnership - there is value in having a partnership, but some of the dynamics and value that you get from having a team is lost with a partnership. Avoidance of a one on one confrontation The first dynamic I see missing in a partnership is the team motivation to do better and how this is delivered to individuals that are not performing. Take two highly motivated individuals and put them together and you will typically see them continue to perform. Now take a situation where you have two individuals, one performing and one not and the behaviour is totally different compared to a team of three or more individuals. With two people, if one feels the other is not performing it becomes a one on one confrontation. Most people avoid confrontations and so nothing changes. Compare this to a situation where you have three people in a team, 2 performing and 1 not the dynamic is totally different, it is no longer a personal one on one confrontation but a team concern and people seem more willing to encourage the individual not performing and express their dissatisfaction as a team if they do not improve. Avoiding the effects of Tuckman’s Group Development Theory If you are not familiar with Tuckman’s group development theory give it a read (http://en.wikipedia.org/wiki/Tuckman's_stages_of_group_development) In a nutshell with Tuckman’s theory teams go through these stages of Forming, Storming, Norming & Performing. You want your team to reach and remain in the Performing stage for as long as possible - this is where you get the most value. When you have a partnership of two and you change the individuals in the partnership you basically do a hard reset on the partnership and go back to the beginning of Tuckman’s model each time. This has a major effect on the performance of a team and what they can deliver. What I have seen is that you reduce the effects of Tuckman's theory the more individuals you have in the team (until you hit the maximum team size in which other problems kick in). While you will still experience Tuckman's theory with a team of three, the impact will be greatly reduced compared to two where it is guaranteed every time a change occurs. It's not just in the numbers, it's in the people One final comment - while the actual numbers of a team do play a role, the individuals in the team are even more important - ideally you want to keep individuals working together for an extended period. That doesn't mean that you never change the individuals in a team, or that once someone joins a team they are stuck there - there is value in an individual moving from team to team and getting cross pollination, but the period of time that an individual moves should be in month's or years, not days or weeks. Why? So why is it important to know this? Why is it important to know how a team works and what motivates them? 
I have been asking myself this question for a while and where I am at right now is this… the aim is to achieve the stage where the sum of the total (team) is greater than the sum of the parts (team members). This is why we form teams and why understanding how they work is a challenge and also extremely stimulating.

    Read the article

  • Admin Panel like Custom Framework

    - by bhuvin
    I want to Create a Framework , like Admin panel , which can rule almost all the aspects of what is shown on the frontend. For an (most basic) example: If suppose the links which are to be shown in a navigation area is passed from the server, with the order and the url , etc. The whole aim is to save the time on the tedious tasks. You can just start creating menus and start assigning pages to it. Give a url, actual files which are to be rendered (in case of static files.), in case of dynamic files, giving the file accordingly. And all this is fully server side manageable using different portlets, sort of things. So basic Roadmap is having : Areas like: Header Area - Which can contain logos, links etc. Navigation Area - Which can contains links and submenus. Content Area - Now this is where the tricky part is that that it has zones like: left, center & right. It contains Order in which it has to be displayed. So, when someday we want to change the way the articles appear on the page, we can do so easily, without any deployments. Now these zones can have n number of internal elements, like the word cloud, or the advertisement area. Footer Area: Again similar as Header Area. Currently there is a preexisting custom framework, which uses XSLT files for pulling out data from the server side. And it has the above capabilities. For example: If there's a grid it will be having a <table> tag embedded in the XSLT file. Now whatever might be the source of the data, we serialize this as XML and give it to the XSLT file and the html is derived from this and is appended to the layer in a page. The problem with this approach is: The XSLT conversion is occurring on the server side, so the server is responsible for getting the data, running XSLT transform, and append the html generated to the layer div. So, according to me, firstly this isn't the server's concern to do so. Secondly for larger applications this might be slower. Debugging isn't possible for XSLT transformation. So, whenever we face problems with data its always a bit of a trial & error method. Maintaining it is a bit of an eerie job i.e. styling changes, and other stuff. Adding dynamic values. Like JavaScript can't actually be very easily used in this. Secondly, we can't use JQuery or any other libraries with this since this is all occurring on the server. For now what I have thought about is using Templating - Javascript - JSON combination in place of XSLT, this will be offloaded to the client and the rendering will take place accordingly. This could solve the above problems and also could add mobile support for the same. Only problem which I could think of is that: It is much work and adding new portlets on the go needs to be looked into. What could be the alternatives for this? What kind of problems are there with the JavaScript approach? What are the different ways to implement the same? Are there any existing frameworks for similar usage?

    Read the article

  • Extreme Makeover, Phone Edition: Comcasts xfinity

    Mobile Makeover For many companies the first foray into Windows Phone 7 (WP7) may be in porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs. Readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development lets examine a conceptual upgrade of Comcasts existing mobile app. Before Comcast has a great mobile app that provides several key features. The ability to browse the lineup using a guide, a client for Comcast email accounts, On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features.   After With the proliferation of DVRs (Digital Video Recorders) and a variety of media devices (TV, PC, Mobile) content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value added services as well as opportunities for marketing and up-sale; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcasts newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution.   The home menu is simplified and highlights Comcasts Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and other viewers the user follows on Twitter.  The popular view orders shows based on the users viewing history and current cable package. The first show Glee is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBOs The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts. Co-Marketing with Dynamic Content The success of Comcasts services are tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web hosted .xap files, including sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actors profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7.   Summary The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities. 
Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.

    Read the article

  • How do I get public feed from facebook without user authentication on a native/Desktop app?

    - by KronoS
    I'm looking to get publicly available facebook feeds (i.e. Google's facebook page/posts). However instead of forcing the user to sign into their own facebook app, I want to be able to access these posts. I've looked into using "App Access Tokens" however since my application is a native/Desktop app (iOS, Android, WP8/Win 8) I'm not able to do this. Is there a way to get publicly accessible feeds from facebook without user authentication? I'm using the Facebook C# SDK to access facebook. Currently I'm doing the following: dynamic tokenInfo = fb.Get( String.Format( "/oauth/access_token?client_id={0}&client_secret={1}&grant_type=client_credentials", FbController.AppId, FbController.AppSecret)); var appAccessToken = (string) tokenInfo.access_token; fb = new FacebookClient(); dynamic response = fb.Get( String.Format( "/google/posts?access_token={0}", appAccessToken)); Problem is that this only works if my application is set to "web" instead of "native/Desktop". I get the following error when running this code and classified app as native/Desktop. (OAuthException - #15) (#15) Requires session when calling from a desktop app
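    For what it's worth, one common way around this (a sketch, not something the Facebook C# SDK itself prescribes) is to keep the app secret off the device entirely: host a small relay service that you control, let it hold the app access token, and have the native/desktop clients call that relay instead of the Graph API directly. The class and method names below are made up for illustration; FacebookClient and Get are the same SDK calls used above.
        using Facebook;   // Facebook C# SDK, as in the question

        // Sketch only: runs on a server you control, never inside the shipped app,
        // because the app secret must not be embedded in a native/desktop binary.
        public class PublicFeedRelay
        {
            private readonly string appId;
            private readonly string appSecret;

            public PublicFeedRelay(string appId, string appSecret)
            {
                this.appId = appId;
                this.appSecret = appSecret;
            }

            public object GetPublicPosts(string pageName)
            {
                // An app access token can be formed as "appId|appSecret" without an
                // extra OAuth round-trip, but only where the secret is safe to use.
                var fb = new FacebookClient(appId + "|" + appSecret);
                return fb.Get(pageName + "/posts");
            }
        }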

    Read the article

  • WCF: Manually configuring Binding and Endpoint causes ServiceChannel Faulted State

    - by Matthias
    Hi there, I've created a ComVisible assembly to be used in a classic-asp application. The assembly should act as a wcf client and connect to a wcf service host (inside a windows service) on the same machine using named pipes. The wcf service host works fine with other clients, so the problem must be within this assembly. In order to get things work I added a service reference to the ComVisible assembly and proxy classes and the corresponding app.config settings were generated for me. Everything fine so far except that the app config would not be recognized when doing an CreateObject with my assembly in the asp code. I went and tried to hardcode (just for testing) the Binding and Endpoint and pass those two to the constructor of my ClientBase derived proxy using this code: private NetNamedPipeBinding clientBinding = null; private EndpointAddress clientAddress = null; clientBinding = new NetNamedPipeBinding(); clientBinding.OpenTimeout = new TimeSpan(0, 1, 0); clientBinding.CloseTimeout = new TimeSpan(0, 0, 10); clientBinding.ReceiveTimeout = new TimeSpan(0, 2, 0); clientBinding.SendTimeout = new TimeSpan(0, 1, 0); clientBinding.TransactionFlow = false; clientBinding.TransferMode = TransferMode.Buffered; clientBinding.TransactionProtocol = TransactionProtocol.OleTransactions; clientBinding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard; clientBinding.MaxBufferPoolSize = 524288; clientBinding.MaxBufferSize = 65536; clientBinding.MaxConnections = 10; clientBinding.MaxReceivedMessageSize = 65536; clientAddress = new EndpointAddress("net.pipe://MyService/"); MyServiceClient client = new MyServiceClient(clientBinding, clientAddress); client.Open(); // do something with the client client.Close(); But this causes the following error: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the faulted state. The environment is .Net Framework 3.5 / C#. What am I missing here?
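    For what it's worth, the "faulted state" message usually means an earlier call already faulted the channel and the original exception got lost. Below is a minimal sketch of the same client code with explicit fault handling added so the root cause surfaces; it assumes the address actually matches what the service host registered (named-pipe URIs normally include a host segment such as net.pipe://localhost/MyService, so "net.pipe://MyService/" is worth double-checking), and MyServiceClient is the generated proxy from above.
        using System;
        using System.ServiceModel;

        // Sketch only: same call, with fault handling that preserves the root cause.
        static void CallService()
        {
            var binding = new NetNamedPipeBinding
            {
                OpenTimeout = TimeSpan.FromMinutes(1),
                SendTimeout = TimeSpan.FromMinutes(1)
            };
            // Placeholder address: it must match the service host's base address;
            // named-pipe URIs usually carry a host segment (net.pipe://localhost/...).
            var address = new EndpointAddress("net.pipe://localhost/MyService");

            var client = new MyServiceClient(binding, address);
            try
            {
                client.Open();
                // ... call service operations here ...
                client.Close();
            }
            catch (CommunicationException)
            {
                client.Abort();   // Close() would throw again on a faulted channel
                throw;
            }
            catch (TimeoutException)
            {
                client.Abort();
                throw;
            }
        }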

    Read the article

  • Selenium RC selenium-testrunner.js Access denied error on IEProxy - Help??

    - by melaos
    Webpage error details
        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)
        Timestamp: Wed, 28 Apr 2010 02:07:17 UTC
        Message: Access is denied.
        Line: 177 Char: 9 Code: 0
        URI: http://www.google.com/selenium-server/core/scripts/selenium-testrunner.js
    Hi guys, I'm just starting to learn Selenium, and while testing (mostly with test cases and a test suite created using the Selenium IDE for Firefox) I'm having some problems getting it to work properly in Internet Explorer. This is the command line that I'm using:
        java -jar "selenium-server.jar" -htmlSuite *iexploreproxy "http://www.google.com/" tests/OR_Discount_UAT_Suite.htm results.html -userExtensions user-extensions.js
    I tried using *iexplore but kept getting a "session id expired" error, so I tried the proxy version instead. I can now see the test runner but keep getting the access denied error. I then tried the same command line using Firefox:
        java -jar "selenium-server.jar" -htmlSuite *firefox3 "http://www.google.com/" tests/OR_Discount_UAT_Suite.htm results.html -userExtensions user-extensions.js
    FYI, I've already unchecked the auto-detect proxy setting in IE8, and I can get everything running perfectly that way. So I'm not sure what the problem is right now :( Can anybody help? Thanks!

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

  • WPF 4.0 Custom panel won't show dynamically added controls in VS 2010 Designer

    - by Matt Ruwe
    I have a custom panel control that I'm trying to dynamically add controls within. When I run the application the static and dynamically added controls show up perfectly, but the dynamic controls do not appear within the visual studio designer. Only the controls placed declaratively in the XAML appear. I'm currently adding the dynamic control in the CreateUIElementCollection override, but I've also tried this in the constructor without success. Public Class CustomPanel1 Inherits Panel Public Sub New() End Sub Protected Overrides Function MeasureOverride(ByVal availableSize As System.Windows.Size) As System.Windows.Size Dim returnValue As New Size(0, 0) For Each child As UIElement In Children child.Measure(availableSize) returnValue.Width = Math.Max(returnValue.Width, child.DesiredSize.Width) returnValue.Height = Math.Max(returnValue.Height, child.DesiredSize.Height) Next returnValue.Width = If(Double.IsPositiveInfinity(availableSize.Width), returnValue.Width, availableSize.Width) returnValue.Height = If(Double.IsPositiveInfinity(availableSize.Height), returnValue.Height, availableSize.Height) Return returnValue End Function Protected Overrides Function ArrangeOverride(ByVal finalSize As System.Windows.Size) As System.Windows.Size Dim currentHeight As Integer For Each child As UIElement In Children child.Arrange(New Rect(0, currentHeight, child.DesiredSize.Width, child.DesiredSize.Height)) currentHeight += child.DesiredSize.Height Next Return finalSize End Function Protected Overrides Function CreateUIElementCollection(ByVal logicalParent As System.Windows.FrameworkElement) As System.Windows.Controls.UIElementCollection Dim returnValue As UIElementCollection = MyBase.CreateUIElementCollection(logicalParent) returnValue.Add(New TextBlock With {.Text = "Hello, World!"}) Return returnValue End Function Protected Overrides Sub OnPropertyChanged(ByVal e As System.Windows.DependencyPropertyChangedEventArgs) MyBase.OnPropertyChanged(e) End Sub End Class And my usage of this custom panel <Window x:Class="MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:CustomPanel" Title="MainWindow" Height="364" Width="434"> <local:CustomPanel1> <CheckBox /> <RadioButton /> </local:CustomPanel1> </Window>

    Read the article

  • Gzip http compression problem on iis7

    - by wpfwannabe
    My web hosting provider is running IIS7 and I am having loads of trouble to get gzip compression to work properly. Host admins say compression is installed. I can confirm compression using some online checking services but not with others. PageSpeed Firefox add-on also says the site is uncompressed. I am personally sitting behind a Squid proxy but web.config settings should take care of proxy issue. Below is the relevant web.config snippet. Most of it is borrowed from various sites. Any thoughts? <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" doStaticCompression="true" /> <httpCompression cacheControlHeader="max-age=86400" noCompressionForHttp10="False" noCompressionForProxies="False" sendCacheHeaders="True" dynamicCompressionEnableCpuUsage="89" dynamicCompressionDisableCpuUsage="90" minFileSizeForComp="1" noCompressionForRange="False"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression>

    Read the article

  • Object Moved error while consuming a webservice

    - by NandaGopal
    Hi - I've a quick question and request you all to respond soon. I've developed a web service with Form based authentication as below. 1.An entry in web.config as below. 2.In Login Page user is validate on button click event as follows. if (txtUserName.Text == "test" && txtPassword.Text == "test") { FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, // Ticket version txtUserName.Text,// Username to be associated with this ticket DateTime.Now, // Date/time ticket was issued DateTime.Now.AddMinutes(50), // Date and time the cookie will expire false, // if user has chcked rememebr me then create persistent cookie "", // store the user data, in this case roles of the user FormsAuthentication.FormsCookiePath); // Cookie path specified in the web.config file in <Forms> tag if any. string hashCookies = FormsAuthentication.Encrypt(ticket); HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, hashCookies); // Hashed ticket Response.Cookies.Add(cookie); string returnUrl = Request.QueryString["ReturnUrl"]; if (returnUrl == null) returnUrl = "~/Default.aspx"; Response.Redirect(returnUrl); } 3.Webservice has a default webmethod. [WebMethod] public string HelloWorld() { return "Hello World"; } 4.From a webApplication I am making a call to webservice by creating proxy after adding the webreferance of the above webservice. localhost.Service1 service = new localhost.Service1(); service.AllowAutoRedirect = false; NetworkCredential credentials = new NetworkCredential("test", "test"); service.Credentials = credentials; string hello = service.HelloWorld(); Response.Write(hello); and here while consuming it in a web application the below exception is thrown from webservice proxy. -- Object moved Object moved to here. --. Could you please share any thoughts to fix it?
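    In case it helps, "Object moved" is usually just the Forms-authentication redirect to Login.aspx being returned to the proxy; a NetworkCredential only covers Windows/Basic authentication, not Forms tickets. Below is a sketch of one common workaround: give the generated proxy a CookieContainer and obtain the forms cookie first. The Login web method shown is hypothetical; the service would need to expose something like it (validating credentials and calling FormsAuthentication.SetAuthCookie) for this to work.
        using System.Net;

        // Sketch only: the generated ASMX proxy derives from HttpWebClientProtocol,
        // so it can carry cookies across calls once a CookieContainer is attached.
        var service = new localhost.Service1();
        service.CookieContainer = new CookieContainer();

        // Hypothetical web method on the service that validates credentials and
        // issues the .ASPXAUTH cookie via FormsAuthentication.SetAuthCookie.
        service.Login("test", "test");

        // Subsequent calls now send the forms ticket instead of being redirected
        // ("Object moved") to the login page.
        string hello = service.HelloWorld();
        Response.Write(hello);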

    Read the article

  • Making flexible C# code in MVC2 for Stored Procedures

    - by cc0
    Thanks to Darin Dimitrov's suggestion I got a big step further in understanding good MVC code, but I'm having some problems making it flexible. I implemented Darin's suggested solution, and it works perfectly for single controllers. However I'm having some trouble implementing it with some flexibility. What I'm looking for is this; To be able to make dynamic column names in json Instead of using "Column1: 'value', ..." and "Column2: 'value', ..." inside the json, I'd like to use for example "id: 'value', ..." and "place: 'value' ..." for one stored procedure, and "animal" and "type" in another (inside the json format). To be able to make dynamic amounts of columns dependent on which stored procedure is called Some stored procedures I'll want to read more than 2 rows from, is there a smart way of accomplishing that? To be able to make numeric (floats and integers) rows from the database be presented inside the json without quotes Like this (name and age); { Column1: "John", Column2: 53 }, I would be very grateful for any feedback and suggestions / code examples I can get here. Even imperfect ones.
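    One hedged sketch that might cover all three points (connection string and action name are placeholders, and it assumes the method lives inside an MVC 2 controller): shape each stored-procedure row as a Dictionary<string, object> keyed by the actual column names. The dictionary keys become the JSON property names, the column count is whatever the reader reports, and ints/doubles stay unquoted because they are passed through as numbers rather than strings.
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Web.Mvc;

        // Sketch only: returns one JSON object per row, named after the real columns.
        public JsonResult GetRows(string procedureName)
        {
            var rows = new List<Dictionary<string, object>>();

            using (var conn = new SqlConnection("<your connection string>"))
            using (var cmd = new SqlCommand(procedureName, conn) { CommandType = CommandType.StoredProcedure })
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var row = new Dictionary<string, object>();
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            // GetValue keeps numeric columns numeric, so they serialize without quotes
                            row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                        }
                        rows.Add(row);
                    }
                }
            }

            // MVC's JSON serializer writes dictionary keys as property names ("id", "place", ...)
            return Json(rows, JsonRequestBehavior.AllowGet);
        }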

    Read the article

  • How do I add PHP support to Apache 2 without breaking my current installation?

    - by Hobhouse
    I run Apache 2 with WSGI (for a Django-app) on a Ubuntu box. I want to use Nagios for server monitoring, and for this purpose it seems I have to add PHP support to Apache. When I installed Apache 2, I did this: apt-get install apache2 apache2.2-common apache2-mpm-worker apache2-threaded-dev libapache2-mod-wsgi python-dev Available modules for apache2 are these: /etc/apache2/mods-available$ ls actions.conf authn_default.load cache.load deflate.conf filter.load mime.conf proxy_ftp.load suexec.load actions.load authn_file.load cern_meta.load deflate.load headers.load mime.load proxy_http.load unique_id.load alias.conf authnz_ldap.load cgi.load dir.conf ident.load mime_magic.conf rewrite.load userdir.conf alias.load authz_dbm.load cgid.conf dir.load imagemap.load mime_magic.load setenvif.conf userdir.load asis.load authz_default.load cgid.load disk_cache.conf include.load negotiation.conf setenvif.load usertrack.load auth_basic.load authz_groupfile.load charset_lite.load disk_cache.load info.conf negotiation.load speling.load version.load auth_digest.load authz_host.load dav.load dump_io.load info.load proxy.conf ssl.conf vhost_alias.load authn_alias.load authz_owner.load dav_fs.conf env.load ldap.load proxy.load ssl.load wsgi.conf authn_anon.load authz_user.load dav_fs.load expires.load log_forensic.load proxy_ajp.load status.conf wsgi.load authn_dbd.load autoindex.conf dav_lock.load ext_filter.load mem_cache.conf proxy_balancer.load status.load authn_dbm.load autoindex.load dbd.load file_cache.load mem_cache.load proxy_connect.load substitute.load What is the best way for me to add PHP support to Apache 2 without breaking my current installation and configuration?

    Read the article

  • make_tuple with boost::python under Visual Studio 9

    - by celil
    Trying to build the following simple example #include <boost/python.hpp> using namespace boost::python; tuple head_and_tail(object sequence) { return make_tuple(sequence[0],sequence[-1]); } available here, I end up with this compilation error under Visual Studio 9 error C2668: 'boost::python::make_tuple' : ambiguous call to overloaded function 1> C:\Program Files\boost_1_42_0\boost/python/detail/make_tuple.hpp(22): could be 'boost::python::tuple boost::python::make_tuple<boost::python::api::object_item,boost::python::api::object_item>(const A0 &,const A1 &)' 1> with 1> [ 1> A0=boost::python::api::object_item, 1> A1=boost::python::api::object_item 1> ] 1> C:\Program Files\boost_1_42_0\boost/tuple/detail/tuple_basic.hpp(802): or 'boost::tuples::tuple<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> boost::tuples::make_tuple<boost::python::api::object_item,boost::python::api::object_item>(const T0 &,const T1 &)' [found using argument-dependent lookup] 1> with 1> [ 1> T0=boost::python::api::proxy<boost::python::api::item_policies>, 1> T1=boost::python::api::proxy<boost::python::api::item_policies>, 1> T2=boost::tuples::null_type, 1> T3=boost::tuples::null_type, 1> T4=boost::tuples::null_type, 1> T5=boost::tuples::null_type, 1> T6=boost::tuples::null_type, 1> T7=boost::tuples::null_type, 1> T8=boost::tuples::null_type, 1> T9=boost::tuples::null_type 1> ] Is this a bug in boost::python, or am I doing something wrong? How can I get the above program to compile?

    Read the article

  • Using FluentValidation with Castle Windsor and Entity Framework 4.0 (POCO) in MVC2

    - by Brian McCord
    This isn't a very simple question, but hopefully someone has run across it. I am trying to get the following things working together: MVC2 FluentValidation Entity Framework 4.0 (POCO) Castle Windsor I've pretty much gotten everything working. I have Castle Windsor implemented and working with the Controllers being served up by the WindsorControllerFactory that is part of MVCContrib. I also have Castle serving up the FluentValidation validators as is described by this article: http://www.jeremyskinner.co.uk/2010/02/22/using-fluentvalidation-with-an-ioc-container/ My problem comes in when I try to use Html.EditorForModel or EditorFor on a view. When I try to do that I get this error message: No component for supporting the service FluentValidation.IValidator`1[[System.Data.Entity.DynamicProxies.State_71C51A42554BA6C3CF05105DA05435AD209602C217FC4C34CA52ACEA2B06B99B, EntityFrameworkDynamicProxies-BrindleyInsurance.BusinessObjects, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] was found This is due to using the POCO generation on Entity Framework 4.0. At runtime, the generated classes get wrapped with a Dynamic Proxy so tracking and lazy loading can happen. Apparently, when using EditorForModel or EditorFor, it tries to ask Windsor to create a validator for the dynamic proxy type instead of the underlying real type. Does anyone know what I can do to solve this issue?
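    One approach worth trying (a sketch against the validator-factory wiring from the article linked above, not a drop-in fix) is to unwrap the EF dynamic-proxy type before asking Windsor for a validator, using ObjectContext.GetObjectType to get back to the real POCO type. Adapt the class to however your factory is actually registered.
        using System;
        using System.Data.Objects;          // ObjectContext.GetObjectType (EF 4)
        using Castle.Windsor;
        using FluentValidation;

        public class WindsorValidatorFactory : ValidatorFactoryBase
        {
            private readonly IWindsorContainer container;

            public WindsorValidatorFactory(IWindsorContainer container)
            {
                this.container = container;
            }

            public override IValidator CreateInstance(Type validatorType)
            {
                // validatorType is IValidator<SomeType>; SomeType may be an EF proxy type
                var modelType = validatorType.GetGenericArguments()[0];
                var pocoType = ObjectContext.GetObjectType(modelType);   // strips the proxy
                var unwrapped = typeof(IValidator<>).MakeGenericType(pocoType);

                return container.Kernel.HasComponent(unwrapped)
                    ? (IValidator)container.Resolve(unwrapped)
                    : null;
            }
        }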

    Read the article

  • Making Ninject Interceptors work with async methods

    - by captncraig
    I am starting to work with ninject interceptors to wrap some of my async code with various behaviors and am having some trouble getting everything working. Here is an interceptor I am working with: public class MyInterceptor : IInterceptor { public async void Intercept(IInvocation invocation) { try { invocation.Proceed(); //check that method indeed returns Task await (Task) invocation.ReturnValue; RecordSuccess(); } catch (Exception) { RecordError(); invocation.ReturnValue = _defaultValue; throw; } } This appears to run properly in most normal cases. I am not sure if this will do what I expect. Although it appears to return control flow to the caller asynchronously, I am still a bit worried about the possibility that the proxy is unintentionally blocking a thread or something. That aside, I cannot get the exception handling working. For this test case: [Test] public void ExceptionThrown() { try { var interceptor = new MyInterceptor(DefaultValue); var invocation = new Mock<IInvocation>(); invocation.Setup(x => x.Proceed()).Throws<InvalidOperationException>(); interceptor.Intercept(invocation.Object); } catch (Exception e) { } } I can see in the interceptor that the catch block is hit, but the catch block in my test is never hit from the rethrow. I am more confused because there is no proxy or anything here, just pretty simple mocks and objects. I also tried something like Task.Run(() => interceptor.Intercept(invocation.Object)).Wait(); in my test, and still no change. The test passes happily, but the nUnit output does have the exception message. I imagine I am messing something up, and I don't quite understand what is going on as much as I think I do. Is there a better way to intercept an async method? What am I doing wrong with regards to exception handling?
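    One pattern that tends to behave better here (sketched below, with RecordSuccess/RecordError standing in for the members of the original class) is to keep Intercept itself synchronous and instead replace invocation.ReturnValue with a wrapper task. That way the exception travels inside the returned Task and surfaces when the real caller awaits it, rather than escaping from an async void method where no caller, including the unit test, can observe it.
        using System;
        using System.Threading.Tasks;
        // IInterceptor/IInvocation as exposed by the interception library in use;
        // swap this for Castle.DynamicProxy if that is where your IInterceptor comes from.
        using Ninject.Extensions.Interception;

        public class MyInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                try
                {
                    invocation.Proceed();
                }
                catch (Exception)
                {
                    RecordError();
                    throw;                       // synchronous failure: rethrow directly
                }

                var task = invocation.ReturnValue as Task;
                if (task != null)
                {
                    // Hand the caller a wrapper that records the outcome when the
                    // underlying task completes; note that methods returning Task<T>
                    // would need a generic wrapper instead.
                    invocation.ReturnValue = WrapAsync(task);
                }
                else
                {
                    RecordSuccess();
                }
            }

            private async Task WrapAsync(Task task)
            {
                try
                {
                    await task.ConfigureAwait(false);
                    RecordSuccess();
                }
                catch (Exception)
                {
                    RecordError();
                    throw;                       // surfaces when the caller awaits
                }
            }

            private void RecordSuccess() { /* from the original class */ }
            private void RecordError() { /* from the original class */ }
        }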

    Read the article

  • cpptask ordering of static libraries in gcc command line

    - by AC
    How do I force cpptask to move the static libraries to the end on arg list issued to the compiler? Here is the clause I am using <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test"> <compiler refid="testsslcc" /> <linkerarg value="-L${libdir}" /> <linkerarg value="-L/usr/local/devl/lib" /> <linkerarg value="-Wl,-rpath,../lib" /> <libset libs="unittest ${libs} dsg readline ncurses gcov" /> <fileset dir="test/obj" includes="main.o" /> <fileset dir="." includes="${TCFILES}" /> <fileset dir="../lib" includes="libboost_thread.a libboost_date_time.a" /> </cpptasks:cc> when this executes, libboost_thread.a libboost_date_time.a are first files in the argument list passed the compiler, gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k ../../lib/libboost_date_time.a ../../lib/libboost_thread.a x.cpp ... which causes compiler error. By manually moving them to the end of the argument list, the application compiles without error. gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k x.cpp ... ../../lib/libboost_date_time.a ../../lib/libboost_thread.a And yes I have tried changing the order in the xml, and that of course didn't work. For now I am using an exec task to call gcc with the files in the correct order but this of course is a hack.

    Read the article

  • How can one detect if a server/script is accessing their site through cURL/file_get_contents()? (excluding user-agents and IP addresses)

    - by navnav
    I've come across a question where a user is having difficulties accessing an image through a script (using cURL/file_get_contents()): How to save an image from url using PHP? The image link seems to return a 403 error when using file_get_contents() to request it. But in cURL, a more detailed error is returned: You were denied access to the system. Turn off the engine or Surf Proxy, Fake IP if you really want to access. Proxy or not accepted from any Web tools Intrusion Prevention System. Binh Minh Online Data Services @ 2008 - 2012 I also failed to access the same image after fiddling around with a cURL request myself. I tried changing the user-agent to my exact browsers user-agent which can successfully access the image. I've also tried the script on my personal local server, which (obviously) uses the same IP address as my browser... So as far as I know, user-agents and IP addresses are out of the situation. How else can someone detect a script performing a request? BTW, this is not for anything crazy. I'm just curious xD

    Read the article

  • How to return DropDownList selections dynamically in C#?

    - by salvationishere
    This is probably a simple question but I am developing a web app in C# with DropDownList. Currently it is working for just one DropDownList. But now that I modified the code so that number of DropDownLists that should appear is dynamic, it gives me error; "The name 'ddl' does not exist in the current context." The reason for this error is that there a multiple instances of 'ddl' = number of counters. So how do I instead return more than one 'ddl'? Like what return type should this method have instead? And how do I return these values? Reason I need it dynamic is I need to create one DropDownList for each column in whatever Adventureworks table they select. private DropDownList CreateDropDownLists() { for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + (counter + 1).ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.ID = "DropDownListID 1"; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = true; //Preserves View State info on Postbacks //ddlList.Style["position"] = "absolute"; //ddl.Style["top"] = 80 + "px"; //ddl.Style["left"] = 0 + "px"; dr2.Close(); } return ddl; }
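    As a sketch of one way out (assuming ADONET_methods, targettable, NumberOfControls and ddlList_SelectedIndexChanged are the members already in the page), the method can return a List<DropDownList> and the caller can add each control to a container such as a PlaceHolder on every request, so view state and events keep working.
        using System.Collections.Generic;
        using System.Data;
        using System.Web.UI.WebControls;

        // Sketch only: builds one DropDownList per column/control and returns them all.
        private List<DropDownList> CreateDropDownLists()
        {
            var lists = new List<DropDownList>();

            for (int counter = 0; counter < NumberOfControls; counter++)
            {
                var ddl = new DropDownList();
                ddl.ID = "DropDownListID" + (counter + 1);     // unique ID per control

                using (var dr2 = ADONET_methods.DisplayTableColumns(targettable))
                {
                    var dt = new DataTable();
                    dt.Load(dr2);
                    ddl.DataValueField = "COLUMN_NAME";
                    ddl.DataTextField = "COLUMN_NAME";
                    ddl.DataSource = dt;
                    ddl.DataBind();
                }

                ddl.AutoPostBack = true;
                ddl.SelectedIndexChanged += ddlList_SelectedIndexChanged;
                lists.Add(ddl);
            }

            return lists;
        }

        // Caller (e.g. Page_Init), re-run on every postback so the controls exist again:
        // foreach (var ddl in CreateDropDownLists()) { ControlsPlaceHolder.Controls.Add(ddl); }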

    Read the article

  • Import error ft2font from matplotlib (python, macosx)

    - by Tomas K
    I was installing matplotlib to use basemap today when I had to install a lot of stuff to make it work. After installing matplotlib and be able to import it I installed basemap but I can't import basemap because of this error: from mpl_toolkits.basemap import Basemap Traceback (most recent call last): File "", line 1, in File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/mpl_toolkits/basemap/init.py", line 36, in from matplotlib.collections import LineCollection File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/collections.py", line 22, in import matplotlib.backend_bases as backend_bases File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 38, in import matplotlib.widgets as widgets File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/widgets.py", line 16, in from lines import Line2D File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/lines.py", line 23, in from matplotlib.font_manager import FontProperties File "/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/font_manager.py", line 52, in from matplotlib import ft2font ImportError: dlopen(/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/ft2font.so, 2): Symbol not found: _FT_Attach_File Referenced from: /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/ft2font.so Expected in: dynamic lookup So when I tried to import ft2font in python by: from matplotlib import ft2font I got this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: dlopen(/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/ft2font.so, 2): Symbol not found: _FT_Attach_File Referenced from: /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/matplotlib/ft2font.so Expected in: dynamic lookup Any idea what to do? I'm using Mac OSX 10.6 and python 2.7.2 installed by homebrew.

    Read the article
