Search Results

Search found 19308 results on 773 pages for 'network efficiency'.


  • Privacy setting using Graph API

    - by Anthony
    Hello, I'm currently trying to set privacy settings on posts I make through the Graph API. The code is: function graphStreamPublish(){ message1 = document.getElementById('message').value; FB.api('/me/feed', 'post', { message: message1 }, privacy: {value: "CUSTOM", friends: "SOME_FRIENDS", network: "1", allow: "204506204", deny: "515592311", function(response) { if (response && response.post_id) { alert('Post was not published.'); } else { alert('Post was published.'); } }); } Then I just call this function for the text I write in the textarea: <center><textarea id="message" cols="50" rows="5">Test goes here!</textarea></center> <br /> <center><a href="" onclick="graphStreamPublish(); return false;">Post message now!</a></center> This does not work with the privacy part, however, but works fine without it. Am I doing something wrong? Thanks.

    Read the article

  • Ad-Hoc mode and Cell (how does it work)?

    - by Ori Cohen
    Hello, I am setting up a wireless ad-hoc network manually in Linux using iwconfig and ifconfig. I have gotten everything working, and I think I understand it all except the access point part. What confused me is this: if I have essid='some_id', channel=11, Mode=Ad-Hoc and appropriate routing/IP on both laptops, why do I need to make sure the Cell is the same? I was under the impression that ad-hoc mode worked independently of any access point. I can easily get around this, I just use iwconfig wlan0 ap 00:00:00:00:00:01. Why is this necessary? I've been unable to find a good tutorial on this. If anyone can clear this up I'd appreciate it. Thanks

    Read the article

  • Office 2010: It's not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing. The bits have left the (product team's) building. Will you upgrade?

    This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word. There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions. Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform. Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms. We've come a long way baby. Or have we?

    As it does every three years or so, debate will now start to rage over whether we need a "14th" version of the PC platform's standard word processor, or a "13th" version of the spreadsheet. If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative. Thing is, that premise is valid for certain customers and not others.

    The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services. The core apps thus grow in mission: Excel is a BI tool. Word is a collaborative editorial system for the production of publications. PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online. Outlook is a time and task management system. Access is a rich client front-end for data-driven self-service SharePoint applications. OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents.

    Google Docs and other cloud productivity platforms like Zoho don't really do these things. And there is a growing chorus of voices who say that they shouldn't, because those ancillary capabilities are over-engineered, over-produced and "under-necessary." They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office's core capabilities, the ones people really need, have become commoditized.

    It's hard to take sides in that argument, because different people, and the different companies that employ them, have different needs. For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers? I need my time back. I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them. And I also need my customers to be able to get more value out of the services I provide.

    Help me triage my inbox, help me get proposals done more quickly and make them easier to read. Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations. And, since I'm in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online.

    Those are my criteria. And, with those in mind, Office 2010 is looking like a worthwhile upgrade. Perhaps it's not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling. I provide a brief roundup of them here. It's admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively.

    Across the Suite

    More than any other, this release of Office aims to give collaboration a real workout. In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues' changes appearing in near real-time. Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network.

    The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook's main window). It's also customizable, allowing users to add, easily, buttons and options of their choosing, into new tabs, or into new groups within existing tabs. Microsoft has also taken the File menu (which was the "Office Button" menu in the 2007 release) and made it into a full-screen "Backstage" view where document-wide operations, like saving, printing and online publishing, are performed.

    And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed. That's much nicer than stripping it off, or adding it back, afterwards. And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents. I know that's useful to me, because I often document or critique applications and need to show them in action. For the vast majority of users, I expect that this feature will be more useful for capturing snapshots of Web pages, but we'll have to see whether this feature becomes popular.

    Excel

    At first glance, Excel 2010 looks and acts nearly identically to the 2007 version. But additional glances are necessary. It's important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet. And it's also important to understand that Excel wasn't designed to handle such workloads past a certain scale. That all changes with this release.

    The first reason things change is that Excel has been tuned for performance. It's been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed and, for the first time, Excel (and the rest of Office) is available in a 64-bit version. For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to.

    On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is). But most important, Excel 2010 supports the new PowerPivot add-in which brings true self-service BI to Office. PowerPivot allows users to import data from almost anywhere, model it, and then analyze it. Rather than forcing users to build "spreadmarts" or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance.

    And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back. Write-back is especially useful in financial forecasting scenarios for which Excel is the natural home. Support for write-back is long overdue, but I'm still glad it's there, because I had almost given up on it.

    PowerPoint

    This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool. Whether or not it's successful in this pursuit, and if offering this is even a sensible goal, is another question. Regardless, the new capabilities are kind of interesting. A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lays testimony to PowerPoint's transformation into a media tool and away from a pure presentation tool.

    These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output. Congruent with that is PowerPoint's new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft's own LiveMeeting. Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.

    Outlook

    Microsoft's ubiquitous email/calendar/contact/task management tool gains long overdue speed improvements, especially against POP3 email accounts. Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live). A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in. I don't know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn. But among the other features, there's very little not to like.

    OneNote

    To me, OneNote is the part of Office that just keeps getting better. There is one major caveat to this, which I'll cover in a moment, but let's first catalog what new stuff OneNote 2010 brings. The best part of OneNote is the way each of its versions has managed hierarchy: Notebooks have sections, sections have pages, pages have sub pages, multiple notes can be contained in either, and each note supports infinite levels of indentation. None of that is new to 2010, but the new version does make creation of pages and subpages easier, and also makes simple work out of promoting and demoting pages from sub page to full page status. And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page's name in double-square-brackets ("[[…]]") creates a link to it.

    OneNote is also great at integrating content outside of its notebooks. With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it's an Office document or a Web page, links the notes you're typing, at the time, to it. A single click from your notes later on will bring that same document or Web page back on-screen. Embedding content from Web pages and elsewhere is also easier. Using OneNote's Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area. Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice. Collaboration gets better too. Real-time multi-author editing is better accommodated and determining author lineage of particular changes is easily carried out.

    My one pet peeve with OneNote is the difficulty using it when I'm not on a Windows PC. OneNote's main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers. Since I have an Android phone and an iPad, I am practically forced to use it. However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7. In the meantime, it turns out that using OneNote's Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes) and that Evernote's Email function combined with Outlook's Send to OneNote button (in the Move group of the ribbon's Home tab) can achieve the reverse.

    Access

    To me, the big change in Access 2007 was its tight integration with SharePoint lists. Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint's Access Services. Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself. To me this makes all kinds of sense. Although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010's new Business Connectivity Services. Each of these tools provides overlapping data entry and data maintenance functionality.

    But if you do prefer Access, then you'll like things like templates and application parts that make it easier to get off the blank page. These features help you quickly get tables, forms and reports built out. To make things look nice, Access even gets its own version of Excel's Conditional Formatting feature, letting you add data bars and data-driven text formatting.

    Word

    As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite's flagship word processing application. So are there any enhancements in Word worth mentioning? I think so. The most important one has to be the collaboration features. Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited. Word also shows you who's editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.

    There's also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck. Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy. Not earth shattering, but nice.

    Other Apps and Summarized Findings

    What about InfoPath, Publisher, Visio and Project? I haven't looked at them yet. And for this post, I think that's fine. While those apps (and, arguably, Access) cater to specific tasks, I think the apps we've looked at in this post service the general purpose needs of most users. And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go. But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake. I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • default webmail url workaround

    - by jan
    Hi, is there a way, or at least a workaround, to mask the default webmail URLs or disable access to them so users will not be able to change their passwords? The website is PHP based and runs on an Apache server under a shared hosting account. The thing is that http://domain.com/webmail lets users access the main panel where they can change their individual passwords. We do not want this. Most solutions point to changing httpd.conf, which we are not allowed to change since this is a shared hosting service. I'm looking for at least a workaround to this issue. How about blocking it in their browsers, since my client's machines are behind a network server; wouldn't that be a decent workaround? Or are there any other suggestions? Please help, this is an urgent issue for me. Thank you very much!

    Read the article

  • Algorithmic trading software safety guards

    - by Adal
    I'm working on an automatic trading system. What sorts of safeguards should I have in place? The main idea I have is to have multiple pieces checking each other. I will have a second, independent little process which will also connect to the same trading account and monitor simple things, like ensuring the total net position does not go over a certain limit, that there are no more than N orders in 10 minutes, or that no more than M positions are open simultaneously. You can also check that the actual open positions correspond to what the strategy process thinks it holds. As a bonus I could run this checker process on a different machine/network provider. Besides the checks in the main strategy, this should ensure that whatever weird bug occurs, nothing really bad can happen. Any other things I should monitor and be aware of?
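
    One possible shape for that second, independent checker, sketched in Java. Everything here is an assumption for illustration only: the TradingAccount interface stands in for whatever API the broker account exposes, and the limits and poll interval are placeholders to be tuned to the actual strategy.

        import java.util.List;

        // Hypothetical read-only view of the broker account; in a real system this
        // would be backed by the broker's own API, not by the strategy process.
        interface TradingAccount {
            double netPosition();                 // signed total exposure
            List<String> openPositions();         // identifiers of currently open positions
            int ordersInLastMinutes(int minutes); // order count in a recent window
        }

        public class RiskChecker {
            // Example limits only; tune to the real strategy.
            private static final double MAX_NET_POSITION = 100_000.0;
            private static final int MAX_ORDERS_PER_10_MIN = 50;
            private static final int MAX_OPEN_POSITIONS = 20;

            public static void main(String[] args) throws InterruptedException {
                TradingAccount account = connectToAccount(); // broker-specific, stubbed below
                while (true) {
                    if (Math.abs(account.netPosition()) > MAX_NET_POSITION
                            || account.ordersInLastMinutes(10) > MAX_ORDERS_PER_10_MIN
                            || account.openPositions().size() > MAX_OPEN_POSITIONS) {
                        // Kill switch: stop trading and page a human.
                        System.err.println("Risk limit breached, halting trading");
                        // haltTrading(account);  // hypothetical emergency action
                    }
                    Thread.sleep(5_000); // poll every few seconds
                }
            }

            private static TradingAccount connectToAccount() {
                throw new UnsupportedOperationException("broker-specific");
            }
        }

    The value of the separate process is exactly that it reads positions from the broker account itself rather than trusting the strategy's internal state; running it on a different machine or network provider, as suggested above, also guards against a single point of failure.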

    Read the article

  • Restarting service from a client computer without rights

    - by Jason
    I have already created the program to restart a SQL database, but it only works if the client has the rights. This is going to be done on a local network from a client computer when they can't get a person who has the password on the phone. Any thoughts? I'm currently using ServiceController to start and stop the database service. When I don't have the rights I get an access denied error, or "This operation might require other privileges." Not sure if impersonation would work since I don't have the user ID and password.

    Read the article

  • count of distinct acyclic paths from A[a,b] to A[c,d]?

    - by Sorush Rabiee
    I'm writing a Sokoban solver for fun and practice. It uses a simple algorithm (something like BFS with a bit of a difference). Now I want to estimate its running time (O and omega), but to do that I need to know how to calculate the count of acyclic paths from one vertex to another in a network. Specifically, I want an expression that gives the count of valid paths between two vertices of an m*n matrix of vertices. A valid path visits each vertex zero or one times and has no circuits; for example, a path that snakes through the grid without revisiting any vertex is valid, while one that crosses itself is not. What is needed is a method to find the count of all acyclic paths between two vertices a and b. Comments on solving methods and tricks are welcome.
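
    For small grids, the count can be obtained directly by backtracking: walk outward from the start vertex, never revisit a vertex, and add one every time the target is reached. A sketch in Java (grid size and endpoints are example values only):

        public class GridPaths {
            static int rows, cols;
            static boolean[][] visited;

            // Counts simple (acyclic) paths from (r, c) to (tr, tc), moving between
            // 4-adjacent cells and never revisiting a cell.
            static long countPaths(int r, int c, int tr, int tc) {
                if (r < 0 || r >= rows || c < 0 || c >= cols || visited[r][c]) return 0;
                if (r == tr && c == tc) return 1;
                visited[r][c] = true;
                long total = countPaths(r + 1, c, tr, tc)
                           + countPaths(r - 1, c, tr, tc)
                           + countPaths(r, c + 1, tr, tc)
                           + countPaths(r, c - 1, tr, tc);
                visited[r][c] = false; // backtrack
                return total;
            }

            public static void main(String[] args) {
                rows = 4; cols = 4;                          // example 4x4 grid
                visited = new boolean[rows][cols];
                System.out.println(countPaths(0, 0, 3, 3));  // paths from A[0,0] to A[3,3]
            }
        }

    Be warned that the number of such paths grows explosively with m and n (this is essentially counting self-avoiding walks, for which no simple closed-form expression is known), so enumeration like this is only practical for small matrices; for a complexity estimate, a crude exponential upper bound is usually all that can be stated.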

    Read the article

  • Importing large datasets on iPhone using CoreData

    - by Matthes
    Hi there, I'm facing a very annoying problem. My iPhone app loads its data from a network server. The data is sent as a plist and, when parsed, needs to be stored to a SQLite DB using Core Data. The issue is that in some cases those datasets are too big (5000+ records) and the import takes way too long. On top of that, when the iPhone tries to suspend the screen, the watchdog kills the app because it's still processing the import and does not respond within 5 seconds, so the import is never finished. I used all the recommended techniques from the article "Efficiently Importing Data" http://developer.apple.com/mac/library/DOCUMENTATION/Cocoa/Conceptual/CoreData/Articles/cdImporting.html and other docs concerning this, but it's still awfully slow. The solution I'm looking for is either to let the app suspend but let the import keep running in the background (the better option), or to prevent attempts to suspend the app at all. Any better idea is welcome too. Any tips on how to overcome these issues are highly appreciated! Thanks

    Read the article

  • Storing And Using Microsoft User Account Credentials in MS SQL Srv 2008 Database

    - by instantmusic
    I'm not exactly sure how to word this for the sake of the title, so please forgive me. Also I can't seem to figure out how to even Google this question, so I'm hoping I can get a lead in the right direction. Part of my software (a VB.NET app) requires the ability to access/read/write a shared network folder. I have an option for the user to specify any credentials that might be needed to access said folder. I want to store the credentials given in the MS SQL database as part of the config (I have a table which contains configuration). My concern is that the password for the user account will be unencrypted. Yet, if I encrypt the password, the VB.NET app and/or database will be unable to use the credentials for file I/O operations unless the password is decrypted before use. I'm fishing for suggestions on how to better handle this situation.
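
    On the asker's stack the usual answers are Windows DPAPI (System.Security.Cryptography.ProtectedData) or SQL Server's own cell-level encryption, so the password sits encrypted at rest and is only decrypted in memory just before the file I/O call. As a language-agnostic illustration of that store-encrypted, decrypt-before-use idea, here is a hedged Java sketch using AES-GCM; key handling is the hard part and is only stubbed here (in a real system the key would live in an OS key store or a tightly ACL'd config, never alongside the ciphertext).

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.GCMParameterSpec;
        import java.nio.charset.StandardCharsets;
        import java.security.SecureRandom;
        import java.util.Base64;

        public class CredentialVault {
            private final SecretKey key; // must come from somewhere the app can protect

            CredentialVault(SecretKey key) { this.key = key; }

            String encrypt(String password) throws Exception {
                byte[] iv = new byte[12];
                new SecureRandom().nextBytes(iv);
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
                byte[] ct = c.doFinal(password.getBytes(StandardCharsets.UTF_8));
                byte[] out = new byte[iv.length + ct.length];
                System.arraycopy(iv, 0, out, 0, iv.length);
                System.arraycopy(ct, 0, out, iv.length, ct.length);
                return Base64.getEncoder().encodeToString(out); // store this in the config table
            }

            String decrypt(String stored) throws Exception {
                byte[] in = Base64.getDecoder().decode(stored);
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
                byte[] pt = c.doFinal(in, 12, in.length - 12);
                return new String(pt, StandardCharsets.UTF_8); // decrypt only just before use
            }

            public static void main(String[] args) throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(128);
                CredentialVault vault = new CredentialVault(kg.generateKey());
                String stored = vault.encrypt("s3cret");
                System.out.println(vault.decrypt(stored));
            }
        }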

    Read the article

  • Access block level storage via kernel

    - by N 1.1
    How do I access block level storage via the kernel (without using SCSI libraries)? My intent is to implement a block level storage protocol over the network for learning purposes, much the same way SCSI works. Requests will be generated by an initiator and sent to a target (both userspace programs), which makes a call into a kernel module and returns the data to the initiator over TCP. So far, I have managed to build a simple "Hello" module and run it (I am new to kernel programming), but I am unable to proceed with block access. After searching a lot, I found struct buffer_head * bread(int dev, int block) in linux/fs.h, but the compiler throws an error: error: implicit declaration of function 'bread'. Please help, and also feel free to advise on getting started with kernel programming. Thank you!

    Read the article

  • How should I measure Concurrent Licence Usage

    - by Andrew Wood
    Hi, I have detailed stats on user access to my system, detailing login and logout times as well as machine used, network username, etc. I need to measure what I would term a concurrent user licence level based on this information. Now, I could take the maximum number logged in on any one day in a 3-month period, say 170, or I could take the average, say 133. Does anyone have or know of a formula for working this out, or is it as simple as the high-water mark, which is 170 in my example? A client has recently gone from an unlimited licence to a concurrent licence, so I am faced with the task of setting the initial licence level. There is potential for more licence sales in the future, so I don't want it set too high, and I do want it based on the historical data that the system collects rather than guesswork.
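
    The high-water mark itself can be computed exactly from the login/logout pairs with a simple sweep: sort every login (+1) and logout (-1) event by time and track the maximum of the running total. A Java sketch (the session data is made up):

        import java.util.ArrayList;
        import java.util.List;

        public class PeakConcurrency {
            record Session(long loginMillis, long logoutMillis) {}

            static int peak(List<Session> sessions) {
                List<long[]> events = new ArrayList<>(); // [timestamp, +1 or -1]
                for (Session s : sessions) {
                    events.add(new long[] { s.loginMillis(), +1 });
                    events.add(new long[] { s.logoutMillis(), -1 });
                }
                // Sort by time; process logouts before logins at the same instant so
                // back-to-back sessions don't count as overlapping.
                events.sort((a, b) -> a[0] != b[0] ? Long.compare(a[0], b[0])
                                                   : Long.compare(a[1], b[1]));
                int current = 0, max = 0;
                for (long[] e : events) {
                    current += (int) e[1];
                    max = Math.max(max, current);
                }
                return max;
            }

            public static void main(String[] args) {
                List<Session> sessions = List.of(
                        new Session(0, 100), new Session(50, 150), new Session(160, 200));
                System.out.println(peak(sessions)); // 2
            }
        }

    Whether the licence level should then be the absolute peak (170 here) or something like the 95th percentile of daily peaks, to avoid paying for one freak day, is a commercial judgement rather than a formula; the sweep at least gives exact historical numbers to base it on.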

    Read the article

  • Android v1.5 w/ browser data storage

    - by Sirber
    I'm trying to build an offline web application which can sync online if the network is available. I tried jQuery jStore, but the test page stops at "testing..." without a result. Then I tried Google Gears, which is supposed to work on the phone, but gears is not found. if (window.google && google.gears) { google.gears.factory.getPermission(); // Database var db = google.gears.factory.create('beta.database'); db.open('cominar-compteurs'); db.execute('create table if not exists Lectures' + ' (ID_COMPTEUR int, DATE_HEURE timestamp, kWh float, Wmax float, VAmax float, Wcum float, VAcum float);'); } else { alert('Google Gears non trouvé.'); } The code does work in Google Chrome v5.

    Read the article

  • FreeSWITCH: what is tls_port?

    - by kiruthika
    Hi all, I am a beginner with FreeSWITCH. I have gone through the configuration file vars.xml in FreeSWITCH, and in it I have seen the following configuration: <X-PRE-PROCESS cmd="set" data="internal_auth_calls=true"/> <X-PRE-PROCESS cmd="set" data="internal_sip_port=5070"/> <X-PRE-PROCESS cmd="set" data="internal_tls_port=5071"/> <X-PRE-PROCESS cmd="set" data="internal_ssl_enable=false"/> <X-PRE-PROCESS cmd="set" data="internal_ssl_dir=$${base_dir}/conf/ssl"/> In the above, my doubt is about tls_port. What is the use of tls_port? I have searched about this on the net and have read that the TLS protocol is used for secure data transfer over a network. So please explain the communication in FreeSWITCH. Thanks in advance.

    Read the article

  • Centralised/shared COM DLL, possible?

    - by vikp
    Hi, we have a system that makes use of a 3rd party COM DLL written in VBA. We have a centralised web application and 1-50 client machines that must reference that COM DLL in order to use our centralised web application. The COM DLL is going to be updated rapidly in the future, which means that it has to be re-installed on every machine manually. Is it possible to centralise this COM DLL somewhere on the network? Are there any other alternatives? Otherwise the maintenance overhead will be huge... Thank you

    Read the article

  • can I read exactly one UDP packet off a socket?

    - by Brian Palmer
    Using UNIX socket APIs on Linux, is there any way to guarantee that I read one UDP packet, and only one UDP packet? I'm currently reading packets off a non-blocking socket using recvmsg, with a buffer size a little larger than the MTU of our internal network. This should ensure that I can always receive the full UDP packet, but I'm not sure I can guarantee that I'll never receive more than one packet per recvmsg call, if the packets are small. The recvmsg man pages reference the MSG_WAITALL option, which attempts to wait until the buffer is filled. We're not using this, so does that imply that recvmsg will always return after one datagram is read? Is there any way to guarantee this? Ideally I'd like a cross-UNIX solution, but if that doesn't exist is there something Linux specific?

    Read the article

  • Get country location of an IP with native PHP

    - by Mint
    Read on before you say this is a duplicate; it's not (as far as I could see). I want to get the country code in PHP from the client. Yes, I know you can do this using external sites or with the likes of geoip_record_by_name, but I don't want to be dependent on an external site, and I can't install PEAR for PHP as I'm using shared Dreamhost hosting. I thought I could just do something like this: $output = shell_exec('whois '.$ip.' -H | grep country | awk \'{print $2}\''); echo "<pre>$output</pre>"; But Dreamhost seems to have an old version of whois (4.7.5), so I get this error on a lot of IPs: Unknown AS number or IP network. Please upgrade this program. So unless someone knows how to get a binary of a newer version of whois onto Dreamhost, I'm stuck. Or is there another way I could get the country code of the client who is loading the page?

    Read the article

  • Socket: can an asynchronous Receive return without reading all the bytes I asked for?

    - by NorthWind
    Hi, I was reading an article on Vadym Stetsiak's blog about how to transfer variable length messages with async sockets (url: http://vadmyst.blogspot.com/2008/03/part-2-how-to-transfer-fixed-sized-data.html). He says: "What to expect when multiple messages arrive at the server? While dealing with multiple messages one has to remember that receive operation can return arbitrary number of bytes being read from the net. Typically that size is from 0 to specified buffer length in the Receive or BeginReceive methods." So, even if I tell BeginReceive to read 100 bytes, it may read less than that and still return??? I am developing network-enabled software (TCP/IP), and I always receive exactly the number of bytes I asked for. I don't even understand the logic: why would Receive complete asynchronously if it didn't get every byte I asked for, instead of just keeping on waiting? Maybe it has something to do with IP vs TCP? Thank you for your help.
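
    The behaviour the blog describes comes from TCP being a byte stream with no message boundaries: a receive call, synchronous or asynchronous and in any language, may legally hand back fewer bytes than the buffer holds, because the completion fires as soon as some data is available. On a fast LAN a whole small message usually arrives in one segment, which is why the asker always sees the exact count, but that is luck rather than a guarantee. The usual remedy is to accumulate until the expected length has arrived; the asker's code is C#, but the pattern is the same everywhere, shown here as a blocking Java sketch for illustration:

        import java.io.EOFException;
        import java.io.IOException;
        import java.io.InputStream;

        public class FixedSizeReader {
            // Keep reading until exactly 'length' bytes have arrived or the peer closes.
            // (DataInputStream.readFully does this same loop for you.)
            static byte[] readExactly(InputStream in, int length) throws IOException {
                byte[] buffer = new byte[length];
                int read = 0;
                while (read < length) {
                    int n = in.read(buffer, read, length - read);
                    if (n == -1) throw new EOFException("peer closed before full message");
                    read += n;
                }
                return buffer;
            }
        }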

    Read the article

  • Error Message and Error Code design

    - by Ved
    We are designing a set of web services which will return an XML string in the response. These are RESTful services, so I will have to send the exception details inside an XML element. I am planning to design a set of error codes which can help me determine at which level the error occurred just by looking at the code. For example: 1000 - Application level; 2000 - DB level; 3000 - Network level. So if I get such an error code I can know right away that this was an application-level error and that it came from the 1st business module. I am not very experienced in this, so I would love to hear your thoughts and criticism. Thanks
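
    One way to make the level recoverable from the number alone is to reserve a numeric band per layer, exactly as the ranges above suggest, and derive the layer back from any code with a range check. A small Java sketch of that convention (the layer names and band width simply mirror the ones proposed above):

        public enum ErrorLayer {
            APPLICATION(1000), DATABASE(2000), NETWORK(3000);

            private final int base;
            ErrorLayer(int base) { this.base = base; }

            public int base() { return base; }

            // Derive the layer back from a full error code, e.g. 2017 -> DATABASE.
            public static ErrorLayer of(int errorCode) {
                for (ErrorLayer layer : values()) {
                    if (errorCode >= layer.base && errorCode < layer.base + 1000) {
                        return layer;
                    }
                }
                throw new IllegalArgumentException("Unknown error code: " + errorCode);
            }
        }

    The XML payload would then typically carry both the numeric code and a human-readable message (the element names are up to you), so clients can branch on the code and log the message.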

    Read the article

  • Proxy calls across a DMZ

    - by John
    We need to determine a quick way for our web application deployed in a DMZ to communicate to our SQL server that lives in the protected network. Only port 80 is open and available, and no direct SQL traffic is allowed across the firewall. So take the following simple system. A web page (default.aspx) makes a call (string GetData()) that resides in an assembly (Simple.DLL). GetData() uses ADO.NET to open a connection, execute a SQL call, retrieve the data, and return the data to the caller. However, since only port 80 is available and no SQL traffic is allowed, what could we do to accomplish our goal? I believe a .NET remoting solution would work, and I have heard of an architecture where a remoting layer proxies the call from Simple.DLL in the DMZ to another Simple.DLL that runs on the protected side. The remoting layer handles the communication between the two DLL’s. Can someone shed some light on how WCF/remoting can help us and how to get started with a solution?

    Read the article

  • Download estimator control using JavaScript and Ajax

    - by Anil Namde
    I would like to implement a download estimator using JavaScript and Ajax. I have gone through Google to find existing implementations of download estimators, and most of the time the strategy is to ask the user for their bandwidth and then calculate the number from that. It is a workable approach, but there is hardly anything reliable about it for getting the estimated time right. What I would like to try instead is to use Ajax to request a file of 100 KB - 200 KB, do the maths, get the number, and update the display. Now, this is surrounded by many questions: the network, the number of packets formed, proxies, etc. All these factors could be enough to turn the approach down, but this is how I have to do it. I would now like input from you all to make it better (as a good discussion). What else could be added to this? Can we find out the user's bandwidth without asking?
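
    The arithmetic itself is just bytes transferred divided by elapsed time, then remaining bytes divided by that rate. The asker's environment is browser JavaScript/Ajax, but the same calculation is shown here as a Java sketch against a hypothetical probe URL (the URL and sizes are made-up assumptions):

        import java.io.InputStream;
        import java.net.URL;

        public class DownloadEstimator {
            // Times the download of a small probe file and extrapolates to a larger one.
            public static void main(String[] args) throws Exception {
                URL probe = new URL("https://example.com/probe-100kb.bin"); // hypothetical
                long start = System.nanoTime();
                long bytes = 0;
                try (InputStream in = probe.openStream()) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) bytes += n;
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                double bytesPerSecond = bytes / seconds;

                long targetSize = 25L * 1024 * 1024; // the real download, e.g. 25 MB
                System.out.printf("Measured %.0f KB/s, estimated %.1f s for %d bytes%n",
                        bytesPerSecond / 1024, targetSize / bytesPerSecond, targetSize);
            }
        }

    Whatever the language, the probe needs to be large enough to swamp TCP slow-start and request latency, and averaging several samples (or re-estimating continuously during the real download) gives a far steadier number than a single measurement.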

    Read the article

  • Geolocation under firefox 3.6 requires Proxy Authentication?

    - by prem
    I am trying to share my location on geolocation enabled pages from Firefox 3.6, but I don't seem to get either a success or a failure. When I wrote my custom JS containing navigator.geolocation.getCurrentPosition(func1,func2), the success callback isn't called at all. When I tamper with the HTTP requests in Firefox, the request to https://www.google.com/loc/json returns with status: 407 [Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )]... Yes, my network is behind a proxy server. But the same thing works in Chrome. I haven't tried other browsers yet.

    Read the article

  • Subscribing to MSMQ over the internet

    - by Nathan Palmer
    I haven't been able to find a clear answer to this problem. Is there a good way to subscribe to MSMQ over the internet? Ideally I need security, both authentication and encryption, for this connection. But I would like the subscriber to act just like any other client subscribed on the local network. I believe I have a couple of options here: expose the MSMQ ports publicly, or put MSMQ behind some type of WCF service (not sure if that works for a subscriber). What other options do I have? We're in a .NET environment, and the main problem we are trying to solve is to change the remote connections from a pull-based system to an event-based system, to reduce the load on the main server.

    Read the article

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious as to any solutions out there for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

    INVENTORY ITEM TYPES:
        Hardware
            PC
                Desktop
                Server
                Laptop
            Accessory
                Input (keyboards, scanners etc)
                Output (monitors, printers etc)
                Storage (USB sticks, tape drives etc)
                Communication (network cards, routers etc)
        Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want to have a ridiculously normalised database for such a relatively simple experiment (eg tables for InventoryType, InventorySubtype, InventoryTypeToSubtype etc). I don't really want to over-complicate my data model with each subtype being inherited, even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this but I can't put my finger on it. Any assistance or input appreciated!
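
    The asker is on .NET/Entity Framework, but the simplest flat-enum trick translates directly: let each value carry a reference to its parent, so the hierarchy lives in the enum itself and no extra lookup tables are needed for a small experiment like this. A hedged Java sketch of the idea (type names are invented):

        public class InventoryTypes {
            enum TopType { HARDWARE, SOFTWARE }

            enum SubType {
                PC(TopType.HARDWARE), ACCESSORY(TopType.HARDWARE), APPLICATION(TopType.SOFTWARE);
                final TopType parent;
                SubType(TopType parent) { this.parent = parent; }
            }

            enum ItemType {
                DESKTOP(SubType.PC), SERVER(SubType.PC), LAPTOP(SubType.PC),
                INPUT(SubType.ACCESSORY), OUTPUT(SubType.ACCESSORY),
                STORAGE(SubType.ACCESSORY), COMMUNICATION(SubType.ACCESSORY),
                SOFTWARE(SubType.APPLICATION);
                final SubType parent;
                ItemType(SubType parent) { this.parent = parent; }
                TopType topLevel() { return parent.parent; }
            }

            public static void main(String[] args) {
                ItemType item = ItemType.STORAGE;
                // Prints: STORAGE is an ACCESSORY under HARDWARE
                System.out.println(item + " is an " + item.parent + " under " + item.topLevel());
            }
        }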

    Read the article

  • Using java to send/receive different objects through UDP

    - by AAA
    Hello everyone, I am writing a program in Java where two or more machines communicate using UDP. My application serializes objects and sends them over the network to another machine, where they are deserialized and dealt with. I was successful in sending one kind of object so far. My problem is that I want the sender to be able to send different kinds of objects, and the receiver to be able to receive them and cast them back to their appropriate types. However, since UDP receives the data into a pre-allocated byte buffer, it seems impossible to cast or detect the type of the received object, as different objects have different sizes. Is there a way I can send different kinds of objects using UDP and then receive them at the other end? (I don't ask for code here, just some ideas.) Thanks
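
    The asker only wants ideas, but the core idea is short: serialize whatever object is being sent into a byte array, put that array in the datagram, and on the receiving side deserialize and branch on the runtime type. ObjectInputStream.readObject recovers the concrete class on its own, so the differing sizes don't matter as long as each object fits in a single datagram and both ends share the class definitions. A sketch (the message classes are placeholders):

        import java.io.*;
        import java.net.*;

        public class UdpObjects {
            // Example message types; both ends need the same classes on the classpath.
            static class OrderMessage implements Serializable { int quantity = 3; }
            static class PingMessage implements Serializable { long sentAt = System.currentTimeMillis(); }

            static byte[] serialize(Object obj) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                    out.writeObject(obj);
                }
                return bytes.toByteArray();
            }

            static Object deserialize(byte[] data, int length) throws IOException, ClassNotFoundException {
                try (ObjectInputStream in =
                         new ObjectInputStream(new ByteArrayInputStream(data, 0, length))) {
                    return in.readObject(); // the stream itself records the concrete class
                }
            }

            public static void main(String[] args) throws Exception {
                try (DatagramSocket receiver = new DatagramSocket(9876);
                     DatagramSocket sender = new DatagramSocket()) {

                    byte[] payload = serialize(new OrderMessage());
                    sender.send(new DatagramPacket(payload, payload.length,
                            InetAddress.getLoopbackAddress(), 9876));

                    byte[] buf = new byte[64 * 1024]; // roughly the maximum UDP payload
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    receiver.receive(packet);

                    Object msg = deserialize(packet.getData(), packet.getLength());
                    if (msg instanceof OrderMessage order) {
                        System.out.println("Got an order for " + order.quantity);
                    } else if (msg instanceof PingMessage ping) {
                        System.out.println("Got a ping sent at " + ping.sentAt);
                    }
                }
            }
        }

    Java serialization is the quickest route for a closed system; if the peers may differ in version or language, a tagged or length-prefixed format (JSON, protobuf and similar) carried by the same DatagramPacket machinery is the sturdier choice.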

    Read the article

  • Logging from multiple apps/processes to a single log file

    - by Andrew
    Our app servers (weblogic) all use log4j to log to the same file on a network share. On top of this we have all web apps in a managed server logging errors to a common error.log. I can't imagine this is a good idea but wanted to hear from some pros. I know that each web app has its own classloader, so any thread synchronization only occurs within the app. So what happens when multiple processes start converging on a single log file? Can we expect interspersed log statements? Performance problems? What about multiple web apps logging to a common log file? The environment is Solaris.
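
    For what it's worth, log4j 1.x only synchronizes writes within a single appender instance, so separate JVMs (and even separate web apps, since each classloader gets its own appender) appending to one file on a network share can interleave or clobber each other's output. A common mitigation is one file per application or per server instance, merged afterwards, or funnelling everything through a single writer via a SocketAppender. A minimal hedged sketch of the per-app-file variant using the log4j 1.2 API (paths and names are placeholders):

        import org.apache.log4j.FileAppender;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;

        public class PerAppLogging {
            // Call once at webapp startup, e.g. from a ServletContextListener (hypothetical).
            public static Logger configure(String appName) throws Exception {
                Logger logger = Logger.getLogger(appName);
                // One file per app/server instance avoids cross-process interleaving
                // on the shared network drive.
                FileAppender appender = new FileAppender(
                        new PatternLayout("%d %-5p [%t] %c - %m%n"),
                        "/mnt/shared/logs/" + appName + ".log",   // placeholder path
                        true);                                     // append
                logger.addAppender(appender);
                logger.setAdditivity(false); // don't also write to the shared root appenders
                return logger;
            }

            public static void main(String[] args) throws Exception {
                configure("orders-app").info("orders-app started");
            }
        }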

    Read the article
